In mathematics, a recurrent word or sequence is an infinite word over a finite alphabet in which every factor occurs infinitely many times. [ 1 ] [ 2 ] [ 3 ] An infinite word is recurrent if and only if it is a sesquipower . [ 4 ] [ 5 ] A uniformly recurrent word is a recurrent word in which, for any given factor X in the sequence, there is some length n_X (often much longer than the length of X) such that X appears in every block of length n_X. [ 1 ] [ 6 ] [ 7 ] The terms minimal sequence [ 8 ] and almost periodic sequence (Muchnik, Semenov, Ushakov 2003) are also used.
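To make the notion of uniform recurrence concrete, here is a minimal empirical sketch in Python. It uses the Thue-Morse word, a standard example of a uniformly recurrent word that is not mentioned above, and it only estimates the window length n_X from a finite prefix, so treat the output as an illustration of the definition rather than a proof.

```python
def thue_morse_prefix(n):
    """First n letters of the Thue-Morse word: letter i is the parity of the 1-bits of i."""
    return "".join(str(bin(i).count("1") % 2) for i in range(n))

def recurrence_window(word, k):
    """Smallest n such that every window of length n in this finite prefix contains
    every length-k factor of the prefix (an empirical stand-in for the n_X above)."""
    factors = {word[i:i + k] for i in range(len(word) - k + 1)}
    for n in range(k, len(word) + 1):
        windows = (word[i:i + n] for i in range(len(word) - n + 1))
        if all(all(f in w for f in factors) for w in windows):
            return n
    return None

prefix = thue_morse_prefix(4096)
for k in (1, 2, 3):
    print(k, recurrence_window(prefix, k))   # each factor length gets its own window length
```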
https://en.wikipedia.org/wiki/Recurrent_word
Recursive economics is a branch of modern economics based on a paradigm of individuals making a series of two-period optimization decisions over time. The neoclassical model assumes a one-period utility maximization for a consumer and one-period profit maximization by a producer. The adjustment that occurs within that single time period is a subject of considerable debate within the field, and is often left unspecified. A time-series path in the neoclassical model is a series of these one-period utility maximizations. In contrast, a recursive model involves two or more periods, in which the consumer or producer trades off benefits and costs across the two time periods. This trade-off is sometimes represented in what is called an Euler equation. A time-series path in the recursive model is the result of a series of these two-period decisions. In the neoclassical model, the consumer or producer maximizes utility (or profits). In the recursive model, the subject maximizes value or welfare, which is the sum of current rewards or benefits and discounted future expected value. The field is sometimes called recursive because the decisions can be represented by equations that can be transformed into a single functional equation sometimes called a Bellman equation . This equation relates the benefits or rewards that can be obtained in the current time period to the discounted value that is expected in the next period. The dynamics of recursive models can sometimes also be studied as differential equations. [ citation needed ] The recursive paradigm originated in control theory with the invention of dynamic programming by the American mathematician Richard E. Bellman in the 1950s. Bellman described possible applications of the method in a variety of fields, including economics, in the introduction to his 1957 book. [ 1 ] Stuart Dreyfus , David Blackwell , and Ronald A. Howard all made major contributions to the approach in the 1960s. In addition, some scholars also cite the Kalman filter invented by Rudolf E. Kálmán and the maximum principle formulated by Lev Semenovich Pontryagin as forerunners of the recursive approach in economics. Some scholars point to the work of Martin Beckmann and Richard Muth [ 2 ] as the first application of an explicit recursive equation in economics. However, probably the earliest celebrated economic application of recursive economics was Robert Merton 's seminal 1973 article on the intertemporal capital asset pricing model . [ 3 ] (See also Merton's portfolio problem ). Merton's theoretical model, one in which investors chose between income today and future income or capital gains, has a recursive formulation. Nancy Stokey , Robert Lucas Jr. and Edward Prescott describe stochastic and non-stochastic dynamic programming in considerable detail, giving many examples of how to employ dynamic programming to solve problems in economic theory. [ 4 ] This book led to dynamic programming being employed to solve a wide range of theoretical problems in economics, including optimal economic growth , resource extraction , principal–agent problems , public finance , business investment , asset pricing , factor supply, and industrial organization . The approach gained further notice in macroeconomics from the extensive exposition by Lars Ljungqvist and Thomas Sargent . [ 5 ] This book describes recursive models applied to theoretical questions in monetary policy , fiscal policy , taxation , economic growth , search theory , and labor economics .
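To illustrate the recursive formulation described above, the following is a minimal value-iteration sketch in Python for a simple "cake-eating" consumption problem. The state grid, discount factor, and log utility are hypothetical choices made purely for illustration (they are not taken from any of the works cited here); the point is only to show a Bellman equation, current reward plus discounted future value, being solved by iterating on the value function.

```python
import numpy as np

# Bellman equation for a "cake-eating" problem (hypothetical illustration):
#   V(w) = max over w' < w of  u(w - w') + beta * V(w')
# i.e. the value of wealth w is the best attainable sum of current utility
# and the discounted value of the wealth carried into the next period.

beta = 0.95                            # discount factor (assumed)
grid = np.linspace(1e-3, 10.0, 200)    # discretized wealth levels (assumed)

def u(c):
    return np.log(c)                   # log utility of consumption (assumed)

V = np.zeros_like(grid)                # initial guess for the value function
for _ in range(1000):                  # iterate the Bellman operator to a fixed point
    V_new = np.empty_like(V)
    for i, w in enumerate(grid):
        feasible = grid < w            # next-period wealth must leave positive consumption
        if not feasible.any():
            V_new[i] = u(w)            # at the lowest grid point, simply consume everything
            continue
        V_new[i] = np.max(u(w - grid[feasible]) + beta * V[feasible])
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new
```

Each pass of the loop applies the Bellman operator once; the fixed point V approximates the value function, and the maximizing choice at each wealth level traces out the optimal consumption policy.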
In investment and finance, Avinash Dixit and Robert Pindyck showed the value of the method for thinking about capital budgeting , in particular showing how it was theoretically superior to the standard neoclassical investment rule. [ 6 ] Patrick Anderson adapted the method to the valuation of operating and start-up businesses [ 7 ] [ 8 ] and to the estimation of the aggregate value of privately held businesses in the US. [ 9 ] There are serious computational issues that have hampered the adoption of recursive techniques in practice, many of which originate in the curse of dimensionality first identified by Richard Bellman. Applied recursive methods, and discussion of the underlying theory and the difficulties, are presented in Mario Miranda & Paul Fackler (2002), [ 10 ] Meyn (2007) [ 11 ] Powell (2011) [ 12 ] and Bertsekas (2005). [ 13 ]
https://en.wikipedia.org/wiki/Recursive_economics
Recursive indexing is an algorithm used to represent large numeric values using members of a relatively small set . It writes the successive differences obtained by repeatedly subtracting the maximum value of the alphabet set from the number, continuing recursively until the difference falls within the range of the set. Recursive indexing with a 2-letter alphabet is called unary code . To encode a number N, keep subtracting the maximum element of this set (S_max) from N, outputting S_max for each such subtraction, and stopping when the remainder lies in the half-open range [0, S_max). For example, let S = {0, 1, 2, 3, 4, …, 10} be an 11-element set, and suppose the value N = 49 is to be recursively indexed. According to this method, subtract 10 from 49 and iterate until the difference is a number below 10. The values are 10 ( N = 49 − 10 = 39), 10 ( N = 39 − 10 = 29), 10 ( N = 29 − 10 = 19), 10 ( N = 19 − 10 = 9), 9. The recursively indexed sequence for N = 49 with set S is therefore 10, 10, 10, 10, 9. Decoding the above example simply involves computing the sum of the index values: 10 + 10 + 10 + 10 + 9 = 49. This technique is most commonly used in run-length encoding systems to encode longer runs than the alphabet size permits.
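The encoding and decoding procedure described above fits in a few lines of code. The following Python sketch assumes the alphabet is the integer set S = {0, 1, …, S_max} and reproduces the N = 49 example from the text.

```python
def encode(n, s_max):
    """Recursive indexing: emit s_max and subtract it from n until the
    remainder falls in the half-open range [0, s_max), then emit the remainder."""
    out = []
    while n >= s_max:
        out.append(s_max)
        n -= s_max
    out.append(n)
    return out

def decode(symbols):
    """Decoding is simply the sum of the emitted index values."""
    return sum(symbols)

print(encode(49, 10))          # [10, 10, 10, 10, 9]
print(decode(encode(49, 10)))  # 49
```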
https://en.wikipedia.org/wiki/Recursive_indexing
Red-short or hot-short refers to brittleness of steels at red-hot temperatures. It is often caused by high sulfur levels, in which case it is also known as sulfur embrittlement . Iron or steel , when heated to above 460 °C (900 °F), glows with a red color. The color of heated iron changes predictably (due to black-body radiation ) from dull red through orange and yellow to white, and can be a useful indicator of its temperature. Good quality iron or steel at and above this temperature becomes increasingly malleable and plastic. Red-short iron or steel, on the other hand, becomes crumbly and brittle. In steel contaminated by sulfur, this embrittlement happens because the sulfur forms iron sulfide/iron mixtures in the grain boundaries of the metal, which have a lower melting point than the steel. [ 1 ] When the steel is heated up and worked, the mechanical energy added to the workpiece increases the temperature further. The iron sulfide (FeS) or iron/iron sulfide alloy (which has an even lower melting point) [ 2 ] begins to melt, and the steel starts to separate at the grain boundaries. Steelmakers add manganese (Mn) to the steel when it is produced, to form manganese sulfide (MnS). Manganese sulfide inclusions have a higher melting point and do not concentrate at the grain boundaries. Thus, when the steel is later heated up and worked, the melting at the grain boundaries does not occur. Steel with elevated levels of copper, which may occur due to ore specifics but is more often caused by contamination from steel scrap , also exhibits hot-shortness. [ 3 ] This is caused by selective oxidation of iron at the grain boundaries, where the more noble and softer copper becomes enriched. [ 4 ]
https://en.wikipedia.org/wiki/Red-short_carbon_steel
The red-suffusion rosy-faced lovebird ( Agapornis roseicollis ), also known as the red-pied lovebird , is not a true colour mutation of the species. Many breeders believe it is due to a health issue, most likely involving the bird's liver. Some think the red-pied form has some genetic relation to the Lutino rosy-faced lovebird mutation , as red spots often appear in Lutino lovebirds. Although many parrot breeders have claimed that this is a genetic mutation , no one has been able to reproduce it consistently through a series of generations. [ 1 ]
https://en.wikipedia.org/wiki/Red-suffusion_rosy-faced_lovebird_mutation
The Red Barn Observatory was established in 2006 and is dedicated to follow-up observations and detections of asteroids , comets , and near-Earth objects . Plans for the observatory began in 2002 and construction was completed in 2005. During the month of August 2006, the observatory code H68 was assigned by the Minor Planet Center . Currently, the observatory is of the "roll-off" roof type, but plans are in the works to install an 8-foot dome in the summer of 2007. The observatory is located in Ty Ty, Georgia , USA, well away from any city light pollution, and is in an excellent location to perform follow-up observations of near-Earth objects and potentially hazardous asteroids that regularly pass close to Earth. The observatory also performs an early-evening sky survey (similar to the Palomar sky survey or NEAT – Near-Earth Asteroid Tracking ) to search for new comets and other unknown objects low on the horizon that can be easily overlooked because of their position. Most amateur-discovered comets are found in this region of the sky. Future plans for the observatory include an amateur-based asteroid study program that will give "amateur astrometrists" on-line access to observatory images so that they can perform astrometry on all detected asteroids or comets. Established in July 2007, the Georgia Fireball Network began monitoring the skies for bright meteors and fireballs. Currently, there are two stations in the Georgia Fireball Network: Station 1 is located in Buena Vista at the Deer Run Observatory and Station 2 is located in Ty Ty, Georgia at the Red Barn Observatory. Together, the stations monitor skies over most of Georgia and parts of Florida and Alabama. The observatory's observer and owner is Steve E. Farmer Jr.
https://en.wikipedia.org/wiki/Red_Barn_Observatory
The Red Color ( Hebrew : צבע אדום , transl. : Tzeva Adom, i.e. code red ) is an early-warning radar system originally installed by the Israel Defense Forces in several towns surrounding the Gaza Strip to warn civilians of imminent attack by rockets (usually Qassam rockets ). [ 1 ] Outside of areas originally serviced by the Red Color system, standard air raid sirens were used to warn of rocket attacks. [ citation needed ] The system originally operated in areas around the so-called Gaza envelope, including in Sderot . [ citation needed ] When the signature of a rocket launch is detected, the system automatically activates the public broadcast warning system in nearby Israeli communities and military bases. A recorded female voice, intoning the Hebrew words for Red Color ("Tzeva Adom"), is broadcast 4 times. [ 2 ] The entire program is repeated until all rockets have impacted and no further launches are detected. The system was installed in Ashkelon between July 2005 and April 2006. Up to June 2006, the announcement was called Red Dawn ( Hebrew : שחר אדום , transl. : Shakhar Adom) but it was changed to the Hebrew words for Red Color ( Hebrew : צבע אדום , transl.: Tzeva Adom) due to a complaint made by a 7-year-old girl named Shakhar (Hebrew for dawn). [ citation needed ] It was the subject of a documentary , which focused on how children are to cope with an alert, [ 3 ] directed by Yoav Shoam. Since 2014, alerts have been available on an iPhone application from the App Store . It was the most downloaded app in Israel in July 2014 during Operation Protective Edge . [ 4 ] Users can select to receive alerts for rocket attacks nationwide, or only in their districts. In October 2023 it was reported that a version of the system modified to suit Ukraine "would start working in Kyiv soon" to warn against the Russian missile strikes of the Russian invasion of Ukraine . [ 5 ]
https://en.wikipedia.org/wiki/Red_Color
Red Data Book of the Republic of Bulgaria ( Bulgarian : Червена книга на Република България ) consists of detailed publications that catalog the status of endangered and threatened species in the country. These books play an important role in the conservation of biodiversity by identifying species at risk of extinction and documenting their current status. They provide thorough information on various species, as well as their habitats , threats , and the conservation measures needed to protect them. Additionally, they serve as educational resources, raising awareness about the importance of protecting Bulgaria's biodiversity. Compiled by scientists, researchers, and conservationists, the Red Data Books are used by environmental organizations, policymakers, and academics to support and implement conservation initiatives. [ 1 ] [ 2 ] The history of the Red Books can be traced back to the International Union for Conservation of Nature (IUCN) , which conceived the idea in the 1960s. The Red Data Book was created as a comprehensive tool to assess and document species at risk of extinction. It aimed to provide detailed information on species’ populations, distribution, and the threats they faced. Initially, the Red Data Book focused on global species, serving as a resource for identifying conservation priorities and guiding protective measures. Over time, the concept expanded to include national versions, focusing on the specific needs and conditions of individual countries. These national Red Data Books adapted the global framework to local contexts, offering understandings of regional species and their conservation status. [ 3 ]
https://en.wikipedia.org/wiki/Red_Data_Book_of_the_Republic_of_Bulgaria
Red Hat Gluster Storage , formerly Red Hat Storage Server, is a computer storage product from Red Hat . It is based on open source technologies such as GlusterFS and Red Hat Enterprise Linux . [ 2 ] The latest release, RHGS 3.5, combines Red Hat Enterprise Linux (RHEL 8 and also RHEL 7) with the latest GlusterFS community release, oVirt , and XFS File System. [ 3 ] [ 4 ] In April 2014, Red Hat re-branded GlusterFS-based Red Hat Storage Server to "Red Hat Gluster Storage". [ 5 ] Red Hat Gluster Storage, a scale-out NAS product, uses as its basis GlusterFS , a distributed file-system. Red Hat Gluster Storage also exemplifies software-defined storage (SDS). In June 2012, Red Hat Gluster Storage was announced as a commercially supported integration of GlusterFS with Red Hat Enterprise Linux . [ 6 ] In 2022, it was announced that Red Hat Gluster Storage version 3.5 would be the final version and that this particular commercial offering would reach end-of-life at the end of 2024. [ 7 ]
https://en.wikipedia.org/wiki/Red_Hat_Gluster_Storage
Red Leaf Resources, Inc. , doing business as Green Leaf Carbon Technologies, is an oil-shale technology company based in Salt Lake City , Utah , United States. It is the developer of the shale oil extraction technology HCCO® (Homogeneous Charged Continuous Oxidation) Process. The company holds mineral leases in Utah for oil shale development that can support 75,000 bbl/day of oil production. Its Uintah Partners LLC subsidiary also holds surface rights in the Uinta Basin with a permit for a 40,000 barrel per day refinery. The company is affiliated with Questerre Energy Corporation . [ 1 ] The Green Leaf Carbon Technologies process for converting organic-rich material (primarily kerogen) to synthetic oil and gas is called the HCCO® Process. The process retorts mined kerogen ore in a closed steel vessel. Heat is moved continuously through the stationary bed with a fuel gas that is produced along with oil from the system. Heat is recovered by exposing the leading edge of the heat front to fresh ore. When all the oil is recovered from one vessel and the spent ore has been cooled, the vessel is offloaded and reloaded with fresh kerogen ore. Typically, three vessels are grouped together in a pod with two vessels involved in the heat movement and oil production while the third vessel is on turnaround. By using pure oxygen with the fuel gas, essentially pure CO₂ is produced which is directly amenable to geologic sequestration or for use (such as for enhanced oil recovery operations). Sufficient fuel gas is produced by the process to satisfy the heat requirements of the retort and to generate sufficient electricity for the process requirements, thereby eliminating the need for any external fuel sources. The process promises production of ‘blue oil’, that is, oil that has been produced with zero CO₂ emissions. Green Leaf Carbon Technologies controls oil shale leases covering about 16,000 acres (65 km²) on State of Utah School and Institutional Trust Lands in eastern Utah. The acreage represents about 1.1 billion barrels (170 million cubic metres) of shale oil . Its 2009 pilot project produced more than 300 barrels (48 m³) of oil. [ 2 ] In 2012, the company formed a joint venture with Total S.A. to launch commercial scale production of 9,800 barrels per day (1,560 m³/d), utilizing the company’s first-generation EcoShale In-Capsule process on its oil shale leases. [ 3 ] In December 2013, a ground water permit was issued for the oil shale mine and shale oil plant. [ 4 ] [ 5 ] With the collapse of oil prices in 2014, the company undertook a process to redesign its shale oil extraction technology to lower its capital and operating costs and improve its environmental footprint. The HCCO® Process is the culmination of those efforts and Green Leaf Carbon Technologies is now pursuing commercial demonstration of the technology. Michael Binnion is the Chairman of the Board of Directors. Jason D’Silva is a director and the Chair of the Audit Committee. Patrick Quinlan is a director and the Chair of the Compensation Committee.
https://en.wikipedia.org/wiki/Red_Leaf_Resources
The Red Line , also referred to as the veterinary cordon fence , is a pest-exclusion fence separating northern Namibia from the central and southern regions. It encloses several northern regions: Oshana Region , Kavango East Region , Omusati Region , Zambezi Region , Omaheke Region , Kunene Region , and parts of the Khomas and Oshikoto Regions . [ 1 ] South of the fence today are commercial farms where the farmers, many of whom are white , own the land. Most of these farms are fenced in and are accessible by constructed farm roads. North of the line, on the other hand, all farm land is communal and operated mostly by black farmers. Livestock is not constrained by fences and often ventures onto roads. [ 2 ] The Red Line is heavily guarded, with roadblocks that check every vehicle that passes. The Red Line is the reason for Namibia's unique status allowing it to export meat to the European Union. The demarcation was created in 1896 in the hope of containing a rinderpest outbreak in the Imperial German colony of South West Africa . Its name stems from the depiction in red ink on a 1911 map created by the German colonial administration. [ 3 ] Fort Namutoni was built as a police station to control north–south travel of the indigenous population and their livestock. The line continued to Okaukuejo in the west and Otjituuo in the east. Nevertheless, the epidemic reached Windhoek in 1897, wiping out half of the cattle population of the OvaHerero people . [ 4 ] The demarcation became a political boundary in 1907, after the Reichstag in Berlin passed a resolution in 1905 during the Herero Wars stating that police protection in German South West Africa "should be restricted to the smallest possible area focusing on those regions where our economic interests tend to coalesce". [ 5 ] The border reflected areas of colonial control, and was already monitored for animal health purposes. The excluded northern areas were largely left to indirect colonial rule through traditional authorities. Passage between the two zones was then restricted for individuals as well as for animals. This led to different political and economic outcomes for the northern Ovambo people and the more centrally located Herero people . [ 6 ] The Red Line was moved several times. A physical fence was only built in the early 1960s, and from then on used to isolate foot-and-mouth disease outbreaks in the North from the farms in the South. As during German colonial rule, it also served to facilitate the apartheid movement's restrictions and influence on people. [ 7 ] The fence stretches across the north of the country and has often been slightly modified over the years. Currently [ when? ] it runs north of Palmwag , past Okaukuejo , along the southern border of the Etosha Pan , through Tsintsabis and eastwards to Otjituuo (east of Grootfontein). North of the line lies about a third of Namibia's land surface. Livestock north of the Red Line may not be sold overseas, while farmers in the South can sell their meat anywhere. Furthermore, even to access markets in South Africa and the rest of Namibia, animals from north of the fence must be quarantined for 21 days, raising the cost of their marketing. Subsequently, animals are usually slaughtered and sold without crossing the veterinary cordon fence. The Red Line restrictions became controversial amidst a 2008 meat market boom.
[ 8 ] Since the Independence of Namibia in the 1990s, the government has been fighting to remove the Red Line and allow prosperity in these regions. [ citation needed ] The aim is to build infrastructure, deconcentrate farms and promote the building of farms on virgin lands. Since this line has been deeply embedded in political and historical issues, the government has proposed uprooting it to the Angolan border. This has caused some concern that the disease will spread to uninfected areas, although areas like Kunene have not had outbreaks in over 30 years and are advocating for this line movement. [ 8 ] There were three outbreaks of foot and mouth disease in Namibia in 2020, all north of the line, the first on 8 August and the second on 13 August in the Caprivi. [ 9 ] The third occurred in the Oshikoto Region on 28 December. [ 10 ]
https://en.wikipedia.org/wiki/Red_Line_(Namibia)
The Living Building Challenge (LBC) Red List contains chemicals commonly used in building materials that have been designated as harmful to "health and the environment". The International Living Future Institute (ILFI) created the list in 2006, and is the only organization that uses the term 'Red List'. [ 1 ] Chemicals on the red list may not be included in materials used in construction that seeks to meet the criteria of the Living Building Challenge (LBC). According to ILFI, the list is composed of materials that should be phased out of production due to health concerns. The list is now updated annually. [ 1 ] The 2024 LBC red list has over twelve thousand items, each identified with a CAS Registry Number . [ 2 ] The list is organized into chemical groups. [ 1 ] In addition to this red list, LBC criteria mandate that petrochemical fertilizers and pesticides cannot be used during the certification period or be used in operations and maintenance. [ 3 ] The Living Building Challenge includes seven performance categories, titled as petals. The red list falls under the materials petal. A building project may not contain any of the Red List chemicals or chemical groups. There is an exception for small components in complex products. [ 4 ] Each of these exceptions must include a written explanation. These exceptions will only be approved with a copy of the letter sent to the manufacturer stating that the product purchase does not ensure an endorsement. In addition, the final documentation must include a statement that asks the manufacturer to stop using the red list material or chemical. There are also temporary red list exceptions for numerous red list items for which viable alternatives are not yet commercially available. Declare is a product labeling program that relies on the LBC Red List as its primary basis for material evaluation. [ 5 ] In creating a Declare label for a product, a manufacturer must disclose all of that product's intentionally added constituent chemicals to the designated 100 parts per million (ppm) reporting threshold. Additionally, the manufacturer must report the extent to which that product is compliant with the Red List. The three compliance levels are: (1) LBC Red List free, which means that the product is free of all red list ingredients; (2) LBC compliant, which means that the product contains some chemicals that ILFI has designated as temporary red list exceptions; or (3) declared, which means that the product is not compliant with the Red List or its temporary exceptions. Products with Declare labels are included in the ILFI's Declare Product Database . A project compliant with the Living Building Challenge must include at least one Declare product for every 500 m² (5,382 ft²) of gross building area and must send Declare program information to at least 10 manufacturers not yet using Declare. [ 6 ] Several other entities in the building industry have developed lists that operate in a similar manner to the LBC Red List. Three of them are described below. The Cradle to Cradle Products Innovation Institute (C2CPII) is a non-profit group that develops and administers the Cradle-to-Cradle Certified Product Standard. This multi-attribute standard evaluates a product's performance in five impact categories: material health, material reutilization, renewable energy and carbon management, water stewardship, and social fairness.
In this product standard, the material health evaluation criteria include compliance with a banned chemical list for bronze level certification. Certified products may not contain listed chemicals as intentionally added ingredients above 1000 ppm. According to C2CPII, chemicals are selected for inclusion on the list due to their tendency to accumulate in the biosphere and lead to irreversible negative human health effects. Additionally, several substances were selected due to the hazards associated with their manufacture, use, and disposal. The Perkins and Will Transparency List includes substances commonly found in the built environment that regulatory entities have classified as being harmful to human and/or environmental health. Because these regulatory designations are constantly evolving, the list is updated as new information is published. The tool is fundamentally grounded in the concept of the Precautionary Principle . [ 7 ] In 2011, the U.S. Green Building Council (USGBC) piloted a credit for its Leadership in Energy and Environmental Design (LEED) rating system that intended to reduce the quantity of indoor contaminants that are harmful to the comfort and well-being of installers and occupants. This pilot credit , which was not included in LEED Version 4, required that specific interior building materials and products not contain listed chemicals for all applicable materials. The list includes halogenated flame retardants and phthalates .
https://en.wikipedia.org/wiki/Red_List_building_materials
The Red List of Bulgarian Vascular Plants , a list by the Bulgarian Academy of Sciences, is a detailed publication that catalogues the national threat status of 801 species of vascular plants from Bulgaria. [ 1 ] This list has been evaluated using Version 3.1 of the IUCN Red List Categories and Criteria. [ 1 ]
https://en.wikipedia.org/wiki/Red_List_of_the_Bulgarian_Vascular_Plants
The Red Queen's hypothesis is a hypothesis in evolutionary biology proposed in 1973, that species must constantly adapt , evolve , and proliferate in order to survive while pitted against ever-evolving opposing species. The hypothesis was intended to explain the constant (age-independent) extinction probability as observed in the paleontological record caused by co-evolution between competing species ; [ 1 ] however, it has also been suggested that the Red Queen hypothesis explains the advantage of sexual reproduction (as opposed to asexual reproduction ) at the level of individuals, [ 2 ] and the positive correlation between speciation and extinction rates in most higher taxa . [ 3 ] In 1973, Leigh Van Valen proposed the hypothesis as an "explanatory tangent" to explain the "law of extinction" known as "Van Valen's law", [ 1 ] which states that the probability of extinction does not depend on the lifetime of the species or higher-rank taxon, instead being constant over millions of years for any given taxon. However, the probability of extinction is strongly related to adaptive zones , because different taxa have different probabilities of extinction. [ 1 ] In other words, extinction of a species occurs randomly with respect to age, but nonrandomly with respect to ecology. Collectively, these two observations suggest that the effective environment of any homogeneous group of organisms deteriorates at a stochastically constant rate. Van Valen proposed that this is the result of an evolutionary zero-sum game driven by interspecific competition : the evolutionary progress (= increase in fitness ) of one species deteriorates the fitness of coexisting species, but because coexisting species evolve as well, no one species gains a long-term increase in fitness, and the overall fitness of the system remains constant. Van Valen named the hypothesis "Red Queen" because under his hypothesis, species have to "run" or evolve in order to stay in the same place, or else go extinct as the Red Queen said to Alice in Lewis Carroll 's Through the Looking-Glass in her explanation of the nature of Looking-Glass Land: Now, here , you see, it takes all the running you can do, to keep in the same place. [ 4 ] Palaeontological data suggest that high speciation rates correlate with high extinction rates in almost all major taxa. [ 5 ] [ 6 ] This correlation has been attributed to a number of ecological factors, [ 7 ] but it may result also from a Red Queen situation, in which each speciation event in a clade deteriorates the fitness of coexisting species in the same clade (provided that there is phylogenetic niche conservatism ). [ 3 ] Discussions of the evolution of sex were not part of Van Valen's Red Queen hypothesis, which addressed evolution at scales above the species level. The microevolutionary version of the Red Queen hypothesis was proposed by Bell (1982), also citing Lewis Carroll, but not citing Van Valen. The Red Queen hypothesis is used independently by Hartung [ 8 ] and Bell to explain the evolution of sex, [ 2 ] by John Jaenike to explain the maintenance of sex [ 9 ] and W. D. Hamilton to explain the role of sex in response to parasites. [ 10 ] [ 11 ] In all cases, sexual reproduction confers species variability and a faster generational response to selection by making offspring genetically unique. Sexual species are able to improve their genotype in changing conditions. 
Consequently, co-evolutionary interactions, between host and parasite, for example, may select for sexual reproduction in hosts in order to reduce the risk of infection. Oscillations in genotype frequencies are observed between parasites and hosts in an antagonistic coevolutionary way [ 12 ] without necessitating changes to the phenotype. In multi-host and multi-parasite coevolution, the Red Queen dynamics could affect what host and parasite types will become dominant or rare. [ 13 ] Science writer Matt Ridley popularized the term in connection with sexual selection in his 1993 book The Red Queen , in which he discussed the debate in theoretical biology over the adaptive benefit of sexual reproduction to those species in which it appears. The connection of the Red Queen to this debate arises from the fact that the traditionally accepted Vicar of Bray hypothesis only showed adaptive benefit at the level of the species or group, not at the level of the gene (although the protean "Vicar of Bray" adaptation is very useful to some species that belong to the lower levels of the food chain ). By contrast, a Red-Queen-type thesis suggesting that organisms are running cyclic arms races with their parasites can explain the utility of sexual reproduction at the level of the gene by positing that the role of sex is to preserve genes that are currently disadvantageous, but that will become advantageous against the background of a likely future population of parasites. However, the assumption of the Red Queen hypothesis, that the primary factor in maintaining sexual reproduction is the generation of genetic variation does not appear to be generally applicable. Ruderfer et al. [ 14 ] analyzed the ancestry of strains of the yeasts Saccharomyces cerevisiae and Saccharomyces paradoxus under natural conditions and concluded that outcrossing occurs only about once every 50,000 cell divisions. This low frequency of outcrossing implies that there is little opportunity for the production of recombinational variation. In nature, mating is likely most often between closely related yeast cells. Mating occurs when haploid cells of opposite mating type MATa and MATα come into contact, and Ruderfer et al. [ 14 ] pointed out that such contacts are frequent between closely related yeast cells for two reasons. The first is that cells of opposite mating type are present together in the same ascus , the sac that contains the cells directly produced by a single meiosis , and these cells can mate with each other. The second reason is that haploid cells of one mating type, upon cell division, often produce cells of the opposite mating type with which they can mate. The relative rarity in nature of meiotic events that result from outcrossing is inconsistent with the idea that production of genetic variation is the main selective force maintaining meiosis in this organism (as would be expected by the Red Queen hypothesis). However, these findings in yeast are consistent with the alternative idea that the main selective force maintaining meiosis is enhanced recombinational repair of DNA damage , [ 15 ] since this benefit is realized during each meiosis, whether or not out-crossing occurs. Further evidence of the Red Queen hypothesis was observed in allelic effects under sexual selection. The Red Queen hypothesis leads to the understanding that allelic recombination is advantageous for populations that engage in aggressive biotic interactions, such as predator–prey or parasite–host interactions. 
In cases of parasite-host relations, sexual reproduction can quicken the production of new multi-locus genotypes allowing the host to escape parasites that have adapted to the prior generations of typical hosts. [ 16 ] Mutational effects can be represented by models to describe how recombination through sexual reproduction can be advantageous. According to the mutational deterministic hypothesis, if the deleterious mutation rate is high, and if those mutations interact to cause a general decline in organismal fitness, then sexual reproduction provides an advantage over asexually reproducing organisms by allowing populations to eliminate the deleterious mutations not only more rapidly, but also most effectively. [ 16 ] Recombination is one of the fundamental means that explain why many organisms have evolved to reproduce sexually. Sexual organisms must spend resources to find mates. In the case of sexual dimorphism , usually one of the sexes contributes more to the survival of their offspring (usually the mother). In such cases, the only adaptive benefit of having a second sex is the possibility of sexual selection, by which organisms can improve their genotype. Evidence for this explanation for the evolution of sex is provided by the comparison of the rate of molecular evolution of genes for kinases and immunoglobulins in the immune system with genes coding other proteins . The genes coding for immune system proteins evolve considerably faster. [ 17 ] [ 18 ] Further evidence for the Red Queen hypothesis was provided by observing long-term dynamics and parasite coevolution in a mixed sexual and asexual population of snails ( Potamopyrgus antipodarum ). The number of sexuals, the number of asexuals, and the rates of parasitic infection for both were monitored. It was found that clones that were plentiful at the beginning of the study became more susceptible to parasites over time. As parasite infections increased, the once-plentiful clones dwindled dramatically in number. Some clonal types disappeared entirely. Meanwhile, sexual snail populations remained much more stable over time. [ 19 ] [ 20 ] On the other hand, Hanley et al. [ 21 ] studied mite infestations of a parthenogenetic gecko species and its two related sexual ancestral species. Contrary to expectation based on the Red Queen hypothesis, they found that the prevalence, abundance and mean intensity of mites in sexual geckos was significantly higher than in asexuals sharing the same habitat. Critics of the Red Queen hypothesis question whether the constantly changing environment of hosts and parasites is sufficiently common to explain the evolution of sexual reproduction . In particular, Otto and Nuismer [ 22 ] presented findings showing that species interactions (e.g. host vs parasite interactions) usually select against sexual reproduction. They concluded that, even though the Red Queen hypothesis favors sex under certain circumstances, it alone does not account for the ubiquity of sex. Otto and Gerstein [ 23 ] further stated that "it seems doubtful to us that strong selection per gene is sufficiently commonplace for the Red Queen hypothesis to explain the ubiquity of sex". Parker [ 24 ] reviewed numerous genetic studies on plant disease resistance and failed to uncover a single example consistent with the assumptions of the Red Queen hypothesis. 
In 2011, researchers used the microscopic roundworm Caenorhabditis elegans as a host and the pathogenic bacterium Serratia marcescens to generate a host–parasite coevolutionary system in a controlled environment, allowing them to conduct more than 70 evolution experiments testing the Red Queen hypothesis. They genetically manipulated the mating system of C. elegans , causing populations to mate either sexually, by self-fertilization, or a mixture of both within the same population. Then they exposed those populations to the S. marcescens parasite. It was found that the self-fertilizing populations of C. elegans were rapidly driven extinct by the coevolving parasites, while sex allowed populations to keep pace with their parasites, a result consistent with the Red Queen hypothesis. [ 25 ] [ 26 ] However, a study of the frequency of outcrossing in natural populations showed that self-fertilization is the predominant mode of reproduction in C. elegans , with infrequent outcrossing events occurring at a rate of around 1%. [ 27 ] Although meioses that result in selfing are unlikely to contribute significantly to beneficial genetic variability, these meioses may provide the adaptive benefit of recombinational repair of DNA damages that arise, especially under stressful conditions. [ 28 ] Currently, there is no consensus among biologists on the main selective forces maintaining sex. The competing models to explain the adaptive function of sex have been reviewed by Birdsell and Wills. [ 29 ] The Red Queen hypothesis has been invoked by some authors to explain evolution of aging. [ 30 ] [ 31 ] The main idea is that aging is favored by natural selection since it allows faster adaptation to changing conditions, especially in order to keep pace with the evolution of pathogens, predators and prey. [ 31 ] A number of predator/prey species couple compete via running speed. "The rabbit runs faster than the fox, because the rabbit is running for his life while the fox is only running for his dinner." Aesop [ 32 ] The predator-prey relationship can also be established in the microbial world, producing the same evolutionary phenomenon that occurs in the case of foxes and rabbits. A recently observed example has as protagonists M. xanthus (predator) and E. coli (prey) in which a parallel evolution of both species can be observed through genomic and phenotypic modifications, producing in future generations a better adaptation of one of the species that is counteracted by the evolution of the other, thus generating an arms race that can only be stopped by the extinction of one of the species. [ 33 ] The interactions between parasitoid wasps and insect larvae, necessary for the parasitic wasp's life cycle, are also a good illustration of a race. Evolutionary strategy was found by both partners to respond to the pressure generated by the mutual association of lineages. For example, the parasitoid wasp group, Campoletis sonorensis , is able to fight against the immune system of its hosts, Heliothis virescens ( Lepidopteran ) with the association of a polydnavirus (PDV) ( Campoletis sonorensis PDV). During the oviposition process, the parasitoid transmits the virus ( Cs PDV) to the insect larva. The Cs PDV will alter the physiology, growth and development of the infected insect larvae to the benefit of the parasitoid. [ 34 ] A competing evolutionary idea is the court jester hypothesis , which indicates that an arms race is not the driving force of evolution on a large scale, but rather it is abiotic factors. 
[ 35 ] [ 36 ] The Black Queen hypothesis is a theory of reductive evolution that suggests natural selection can drive organisms to reduce their genome size. [ 37 ] In other words, a gene that confers a vital biological function can become dispensable for an individual organism if its community members express that gene in a "leaky" fashion. Like the Red Queen hypothesis, the Black Queen hypothesis is a theory of co-evolution. Van Valen originally submitted his article to the Journal of Theoretical Biology , where it was accepted for publication. However, because "the manner of processing depended on payment of page charges", [ 1 ] Van Valen withdrew his manuscript and founded a new Journal called Evolutionary Theory , in which he published his manuscript as the first paper. Van Valen's acknowledgement to the National Science Foundation ran: "I thank the National Science Foundation for regularly rejecting my (honest) grant applications for work on real organisms, thus forcing me into theoretical work". [ 1 ]
https://en.wikipedia.org/wiki/Red_Queen_hypothesis
The Xbox 360 video game console was subject to a number of technical problems and failures, some as a result of design flaws. Some issues could be identified by a pattern of red lights on the front face of the console; these colloquially became known as the " Red Ring of Death " or the " RRoD ". [ 1 ] [ 2 ] There were also other issues, such as discs becoming scratched in the drive and " bricking " of consoles due to dashboard updates. There were many conflicting estimates of the console's unusually high failure rate . [ 3 ] [ 4 ] [ 5 ] The warranty provider SquareTrade estimated it at 23.7% in 2009, [ 6 ] while a Game Informer survey reported 54.2%. [ 7 ] Among the consoles owned by employees of Joystiq , which saw heavy use for games journalism purposes, the failure rate had reached 90% by the end of 2007. [ 8 ] The crisis ultimately abated from 2009 due to design revisions to the later-produced Xbox models; the S model in particular was far more resilient. By 2012 the failure rate for the Xbox 360 family was comparable to the PS3 failure rate. [ 9 ] The issues proved extremely damaging for Microsoft . Repairs and shipping of replacement hardware cost the company $1.15bn. The issues triggered multiple lawsuits , [ 10 ] cost the Xbox ground in the console wars and threatened the long-term viability of the Xbox brand. [ 11 ] The design of the Xbox 360 was a hurried process subject to a number of late changes. This included the addition of a hard disk drive, which compromised airflow in the machine. The holes in the case were added to try to ameliorate this airflow issue. Time pressures also resulted in insufficient testing. Microsoft was aware of a myriad of technical challenges as early as August 2005, including "overheating graphics chips, cracking heat sinks, cosmetic issues with the hard disk and the front of the box, underperforming graphics memory chips from Infineon , a problem with the DVD drive - and more". Thermal issues with the GPU were ultimately what caused the infamous "Red Ring" issues, while the DVD drive issue was later responsible for scratching discs. An engineer requested a shutdown of the production line that month, but this did not occur out of fear of a delay to console delivery in some regions. [ 12 ] The console launched in November 2005 in North America, swiftly followed by other regions. However, consoles began failing "almost immediately". Microsoft initially dismissed these concerns as "isolated reports" that were within the normal range of failure (around 2%). [ 12 ] [ 13 ] [ 14 ] [ 15 ] [ 16 ] In late 2005, Microsoft's internal data was reporting a failure rate during manufacturing of around 6-7%. These consoles were not shipped to consumers but remained in warehouses. By March 2006, around 30% of consoles manufactured were either returned or had failed checks at the factory. At one point Microsoft's yield was as low as 32% (meaning a failure rate of 68%). [ 17 ] In 2015, Peter Moore , who had been the Vice President of Microsoft's Interactive Entertainment Business division at the time of the crisis, detailed a conversation he had with Microsoft CEO Steve Ballmer in the mid-2000s on his planned response to the incident. He stated: "...here's what we have to do: we need to FedEx an empty box to a customer who had a problem - they would call us up - with a FedEx return label to send your box, and then we would FedEx it back to them and fix it. ... I always remember $240m of that was FedEx. ... It was sickening. I was doing a lot of interviews. ... We couldn't figure it out. ... 
There was a theory. We had changed our solder, which is the way you put the GPU and the fans, to lead-free. ... We think it was somehow the heat coming off the GPU was drying out some of the solder, and it wasn't the normal stuff we'd used, because we had to meet European Standards and take the lead out. ... He said, 'what's it going to cost?' I remember taking a deep breath, looking at Robbie, and saying, 'we think it's $1.15bn, Steve.' He said, 'do it.' There was no hesitation. ... If we hadn't made that decision there and then, and tried to fudge over this problem, then the Xbox brand and Xbox One wouldn't exist today." [ 11 ] In July 2007 Moore published an open letter recognizing the console's problems, as well as announcing a three-year warranty from the original date of purchase for every Xbox 360 console that experienced the "general hardware failure" (Red Ring) issue. [ 18 ] That October a class action lawsuit was brought against Microsoft due to the problems the console had with disc scratching, which could render games unplayable. [ 19 ] The case was lengthy and worked through the court system over the following decade, with litigation focusing on the validity of class certification. In 2017 the matter was decided by the United States Supreme Court in Microsoft Corp. v. Baker , which was decided in favour of Microsoft. During the Game Developers Conference in February 2008, Microsoft announced that the failure rate had "dropped", but did not mention any specifics. [ 20 ] The same month, electronics warranty provider SquareTrade published an examination of 1040 Xbox 360s and said that they suffered from a failure rate of 16.4%. Of the 171 failures, 60% were due to a general hardware failure (and thus fell under the 3-year extended warranty). Of the remaining 40%, which were not covered by the extended warranty, 18% were disc read errors, 13% were video card failures, 13% were hard drive freezes, 10% were power issues and 7% were disc tray malfunctions. [ 21 ] [ 22 ] SquareTrade also stated that its estimates are likely significantly lower than reality due to the time span of the sample (six to ten months), the eventual failure of many consoles that did not occur within this time span and the fact that most owners did not deal with SquareTrade and had their consoles repaired directly through Microsoft via the extended RROD warranty. From 2009 the crisis began to abate due to design revisions. The Jasper models sold that year had a failure rate of under 4%, with the overall product family rate at around 12% in the first quarter. [ 23 ] The Xbox 360 S launched in 2010 and had a far lower failure rate. The S models did not include segmented outer ring lights like the launch model, and were not included in the extended warranty. [ 24 ] The 360 family as a whole was discontinued in 2016, but Microsoft continued to offer repairs for a time after that. [ 25 ] Microsoft did not reveal the full technical details of the problem until a 2021 documentary on the history of Xbox, though earlier independent investigations had correctly identified the issues with the GPU and soldering. [ 26 ] In a nod to the incident, Microsoft sold Red Ring holiday sweaters in December 2024. The item was popular among Microsoft employees. [ 27 ] The launch model of the Xbox 360 included four lights in a ring around the power button, on the front face of the console. Green indicated normal operation, while red lights were used for error codes. 
Most famously, three red lights indicated a "general hardware failure". [ 28 ] The error was nicknamed the "Red Ring of Death" after Windows ' Blue Screen of Death error. The error was sometimes preceded by freeze-ups, graphical problems in the middle of gameplay, such as checkerboard or pinstripe patterns on the screen, and sound errors, mostly consisting of extremely loud noises that could not be affected by the volume control, with the console responding only when the power button was pressed to turn it off. [ 29 ] The problem was most prevalent in early models. This error code was usually caused by the failure of one or more hardware components, although it could indicate that the console was not receiving enough power from the power supply. This could be caused by a faulty or improperly connected power supply. The three flashing lights could also be caused by power surges. Unplugging and restarting the console fixed this issue in some cases. [ 30 ] [ 31 ] On the Xbox 360 S and E models, the power button utilizes a different design that does not incorporate the same style of lighting that past models used. [ 32 ] A flashing red light means that the console is overheating, similar to the two-light error code on the original model Xbox 360; however, an on-screen message also appears, telling the user that the console will automatically power off to protect itself from overheating. A solid red light is similar to the one-light error if an "E XX" error message is displayed and a three-light error code if the error message is absent. The related E74 error caused only one of the red ring quadrants to illuminate, and the screen to display an error message in multiple languages: "System Error. Contact Xbox Customer Support", with the code E74 at the bottom. Much like the infamous Red Ring issue, the error was related to connection issues with the GPU, but could also be caused by a more general GPU failure or failing eDRAM. The E74 issue was covered by the three-year extended warranty from 2009 as Microsoft considered it part of the same issue as the Red Ring, and customers who previously paid Microsoft for out-of-warranty service to correct the E74 error received a refund. [ 33 ] [ 34 ] [ 35 ] The console would illuminate all four lights if it could not detect an AV cable. This was not triggered by later revisions of the console which included an HDMI port. In some cases the four lights indicated a more serious problem with the console, followed by a 2-digit error code. [ 36 ] The four lights would also be illuminated briefly by power issues such as surges or brief outages. Microsoft did not reveal the cause of the issues publicly until 2021, when a 6-part documentary on the history of Xbox was released. The Red Ring issue was caused by the cracking of solder joints inside the GPU flip chip package, connecting the GPU to the substrate interposer, as a result of thermal stress from heating up and cooling back down when the system is power cycled. [ 37 ] Microsoft had switched to lead-free solder due to regulations in the European Union , but using the incorrect alternative resulted in fracturing. [ 12 ] While the cause was not confirmed by Microsoft until 2021, many independent investigations came to similar conclusions at the time, identifying thermal stress on the GPU and the solder as the culprit. 
The German computer magazine c't blamed the problem primarily on the use of the wrong type of lead-free solder, a type that, when exposed to elevated temperatures for extended periods of time, becomes brittle and can develop hairline cracks that are almost irreparable. [ 38 ] Microsoft designed the chip in-house to cut out the traditional ASIC vendor with the goal of saving money in ASIC design costs. After multiple product failures, Microsoft went back to an ASIC vendor and had the chip redesigned so it would dissipate more heat. [ 39 ] [ 40 ] The Guardian also claimed that using Xbox Kinect with an old Xenon generation Xbox would cause the Red Ring, but this was denied by Microsoft. [ 41 ] The design of the disc drive was flawed, and could cause scratches on discs, particularly if the console was moved while the disc was spinning. Unlike the Red Ring issues, the disc scratching was not resolved by hardware revisions and was present in the S and E models. Those versions shipped with a sticker informing users that moving the console while powered on posed a risk. [ 42 ] Even on static footing, however, normal floor vibrations that would occur in a household environment were enough to cause disc scratches. [ 43 ] The issue was particularly prevalent in 2006 models. The issue was subject to multiple independent investigations, initially by the Dutch television program Kassa and later by the European Commissioner for Consumer Protection and the BBC . The BBC investigation in particular involved laboratory conditions for testing. [ 44 ] The issue ultimately led to a Supreme Court case which was ruled in favour of Microsoft in 2017. [ 45 ] [ 46 ] Although discs scratched by the Xbox 360 were not covered under its warranty, [ 47 ] Microsoft's Xbox Disc Replacement Program [ 48 ] sold customers a new copy of discs scratched by the Xbox 360, if they were published in countries where the Xbox was originally sold, at a cost of $20. [ 49 ] The published list of games that qualify, however, was limited. [ 50 ] Third-party games were only ever replaced at the discretion of the publishers. Electronic Arts, for example, offered replacements made within 90 days of purchase. [ 51 ] Independent investigations concluded that the disc drives lacked a mechanism to secure the disc solidly in place. [ 52 ] Tilting or moving the console, when operating with a disc spinning inside, can potentially cause damage to the disc and in some cases render the disc unplayable as a result. [ 53 ] Microsoft engineers were aware of the issue ahead of launch, around September or October of 2005. However, installing "bumpers" to prevent the discs moving out of alignment would have added 50 cents to the production cost of each console, and was not implemented. An alternative would have been to slow the disc rotation speed but this would have led to increased loading times, and magnetic adjustments would not have been possible due to the disc tray locking mechanism. [ 54 ] Several Xbox 360 system updates caused major issues for users. An update patch released on November 1, 2006 was reported to " brick " consoles, rendering them useless. [ 55 ] The most obvious issue occurred after the installation of the patch, when the console immediately rebooted and showed an error message. Usually, error code E71 was shown during or directly after the booting animation. 
In response to the November 2006 update error that "bricked" his console, a California man filed a class action lawsuit against Microsoft in Washington federal court in early December 2006. [ 56 ] The lawsuit sought $5 million in damages and the free repair of any console rendered unusable by the update. This was the second such lawsuit filed against Microsoft, the first having been filed in December 2005, shortly after the 360's launch. Following Microsoft's extension of the Xbox 360 warranty to a full year, from the previous 90 days, the California man's attorney confirmed to the Seattle Post Intelligencer that the lawsuit had been resolved under confidential terms. [ 57 ]

On November 19, 2008, Microsoft released the " New Xbox Experience " (NXE). This update provided streaming Netflix capability and avatars; however, some users reported that the update caused their consoles to stop reading optical media properly. [ 1 ] Others reported that the update disabled audio through HDMI connections. [ 58 ] A Microsoft spokesperson stated that the company was "aware that a handful of Xbox LIVE users are experiencing audio issues, and are diligently monitoring this issue and working towards a solution." Microsoft released a patch for the HDMI audio issues on February 3, 2009. [ 59 ] A patch released in May 2011 prevented some users from playing games from discs. The update involved "a change in the disc reading algorithms", but would simply inform users that the disc was unreadable and ask them to clean it with a cloth. [ 60 ]

In 2007, the official steering wheel peripheral faced issues with overheating and releasing smoke, prompting the "Hotwheels" nickname. Microsoft encouraged users to use the steering wheel only in battery mode rather than while plugged in. [ 61 ] That August a product recall was issued, with Microsoft retrofitting the existing steering wheels to remedy the problem. [ 62 ]

The Nyko Intercooler was a popular aftermarket cooler, purchased by users who wished to improve air flow in an attempt to avoid the red-ring issue. While the exact cause of the red-ring error was not yet public in the late 2000s, it was known that temperature was an issue. [ 63 ] [ 64 ] However, the Nyko Intercooler itself had issues, and its usage could cause the red-ring error or damage the DC power input. [ 64 ] The Intercooler could also melt itself onto the 360, melt the power cord, or become extremely hard to remove. [ 65 ] Microsoft stated that the peripheral drained too much power from the console (the Intercooler power cord was installed between the Xbox 360 power supply and the console itself), could cause faults to occur, and that consoles fitted with the peripheral would have their warranties voided. Nyko released an updated Intercooler that used its own power source and claimed the problem no longer occurred, but this did not affect Microsoft's stance on the warranty.
https://en.wikipedia.org/wiki/Red_Ring_of_Death
The Red Rose of Lancaster ( blazoned : a rose gules ) was the heraldic badge adopted by the royal House of Lancaster in the 14th century. In the modern era , it symbolises the county of Lancashire . The exact species or cultivar which it represents is thought to be Rosa gallica officinalis . John of Gaunt 's younger brother Edmund of Langley, 1st Duke of York (1341–1402), adopted the White rose of York as his heraldic badge. His descendants fought for control of the throne of England during several decades of civil warfare, which became known as the Wars of the Roses after the heraldry of the House of York. Adopted after the civil wars of the fifteenth century had ended, the red rose became a symbol of the English monarchy. The opposition of the roses was a romantic invention created after the fact, and the Tudor arts, through poets like Shakespeare, gave the wars their popular conception; the name "Wars of the Roses" itself was coined in the 19th century. The conflict was ended by King Henry VII of England who, upon marrying Elizabeth of York , created the Tudor rose , the symbol of the Tudor dynasty .

Lancaster's Red Rose (also known as Apothecary's Rose , Old Red Damask and Rose of Provins) is an officinal variety and is possibly the first cultivated rose . The rose grew wild throughout Central Asia and was discovered by the ancient Persians and Egyptians . It was later adopted by the Romans , who introduced it to Gaul (France), where it assumed the name Rosa gallica . It is documented that Charlemagne 's court exploited the rose as a perfume . The rose was also appreciated for its medicinal value and was used in countless remedies .

The Red Rose of Lancaster derives from the gold rose badge of Edward I of England . Other members of his family used variants of the royal badge, with the king's brother, the Earl of Lancaster, [ who? ] using a red rose. [ 1 ] It is incorrectly believed that the Red Rose of Lancaster was the House of Lancaster's badge during the Wars of the Roses ; evidence for this "wearing of the rose" is scant. [ 2 ] There are, however, doubts as to whether the red rose was actually an emblem taken up by the Lancastrians during the Wars of the Roses. Adrian Ailes has noted that the red rose "probably owes its popular usage to Henry VII quickly responding to the pre-existing Yorkist white rose in an age when signs and symbols could speak louder than words. It also allowed Henry to invent and exploit his most famous heraldic device, the Tudor Rose , combining the so-called Lancastrian red rose and the White Rose of York . This floral union neatly symbolised the restoration of peace and harmony and his marriage in January 1486 to Elizabeth of York. It was a brilliant piece of simple heraldic propaganda." [ 3 ]

The Tudor Rose is used as the plant badge of England ( Scotland uses the thistle , Ireland uses the shamrock , and Wales uses the leek ). The rose does not form any part of the insignia of the Duchy of Lancaster , but came to be seen as an emblem of the county of Lancashire , and as such was incorporated in the coats of arms of numerous Lancashire local authorities including the county council. Since 1974 a number of metropolitan boroughs in Greater Manchester and Merseyside have included red roses in their armorial bearings to show their formation from parts of Lancashire. It is also present in the crest of the coat of arms of the London Borough of Enfield .
The traditional Lancashire flag , a red rose on a white field, was never officially registered with the Flag Institute and when this was attempted it was found that this flag had been registered by the town of Montrose, Scotland . As two flags of the same design can not be registered, Lancashire's official flag is now registered as a red rose on a yellow field. [ 4 ] [ 5 ] Today the Red Rose is still widely used, and not necessarily on a yellow background. Lancashire County Cricket Club still use the rose as an emblem. The Trafford Centre also features Red Roses in its architecture, most noticeably on all of the glass panes in the shopping centre. Lancashire GAA features a red rose on its emblem. Manchester City Football Club featured the red rose on the club badge from 1972 to 1997 and reinstated it in 2015, reflecting Manchester's history as part of Lancashire. [ 6 ] It also features on the badges of Blackburn Rovers , Bolton Wanderers , and Barrow . Edge Hill University in Ormskirk uses the Red Rose on a yellow background on its crest along with a Liver bird which signifies its current location (Lancashire) and origins in Liverpool. [ 7 ] The shield of Lancashire County Council's coat of arms , however, displays not one but three red roses, on gold piles on a red background. The arms have been official since 1903. [ 8 ] From the nineteenth century the red rose was part of the badge of a number of units of the British Army recruiting in the county. During the First World War , the rose was worn by 55th (West Lancashire) Division ; their motto was "They win or die, who wear the Rose of Lancaster". When the division was reformed in 1920 , it maintained the rose as its insignia. The cap badge of the Duke of Lancaster's Regiment , formed in 2006, features the rose. The Saskatoon Light Infantry of the Canadian Army also incorporated the red rose into the design of their cap badge and regimental buttons, due to an alliance with the York and Lancaster Regiment of the British Army . The Canadian city of Montreal has a Lancastrian rose in the top right hand corner of its flag , representing the city's historical English community. The U.S. City of Lancaster, Pennsylvania , known as "Red Rose City", uses the Lancastrian rose as its seal, and in its flag.
https://en.wikipedia.org/wiki/Red_Rose_of_Lancaster
The Red Sea and its extensions of the Gulf of Suez and the Gulf of Aqaba contain the largest recorded concentration of deep-sea brine pools on the planet. These pools have many features that make them uninhabitable to almost all organisms, yet certain communities of microbes thrive within these extreme environments, which have temperatures ranging from 2.0 °C to 75 °C. [ 1 ] The Red Sea brine pools have extreme salt concentrations and varying compositions of nutrients and other chemicals that directly affect their microbiomes. There are approximately 25 individual pools in the region, [ 2 ] [ 3 ] some of which are closely clustered together in groups, which has left their naming and classification unsettled. The brine pools originate from hydrothermal vents , the shifting of tectonic plates , and the accumulation of dense water that resists mixing and therefore collects within faults and divots in the sea floor. Atlantis II Deep , Discovery Deep, and Kebrit Deep are the most investigated and researched brine pools within the Red Sea. [ 4 ] Additionally, many microbial species form beneficial symbiotic relationships with organisms living and feeding in proximity to the pools. These relationships allow for the study of specialized adaptations of microbes to brine pool environments. In addition to the originally-discovered warm brine pools, four smaller warm brine pools, named the NEOM Brine Pools, have recently been discovered in the Gulf of Aqaba . Furthermore, multiple cold seeps have been identified in the Red Sea (the Thuwal Cold Seeps), consisting of two individual pools. Three of these Red Sea brine pools are unnamed, as they are small and potentially extensions of other nearby larger pools. [ citation needed ]

The virus community within the many Red Sea brine pools is largely unexplored. However, with the use of metagenomics , the viral communities of the Atlantis II Deep , Discovery Deep, and Kebrit Deep have been shown to be diverse and distinct within and between the brine pools. Across all three brine pools, double-stranded DNA (dsDNA) viruses are the most dominant. [ 5 ] [ 6 ] Of the dsDNA viruses investigated, Caudovirales are the most abundant across all three brine pools. Low abundances of Phycodnaviridae and trace amounts of Iridoviridae are also present within the brine-seawater interfaces, and thus may be indicative of a "pickling" effect rather than a host-specific presence. [ 5 ] Viral species tend to follow their bacterial-host population dynamics. Bacterial and archaeal composition and abundance differ between specific layers of the brine pool, including the overlying seawater, the brine-seawater interface, the brine-pool sediments, and the brine waters themselves. [ 7 ] [ 8 ] [ 9 ] As a result, the viral community within the brine pools of the Red Sea is stratified across the brine-seawater interface. [ 10 ] The upper layer of Kebrit Deep's brine-seawater interface is dominated by viruses that infect marine bacteria, while the lower layer is dominated by haloviruses and halophages. [ 5 ] Deep-sea marine viruses maintain the diversity and abundance of the microbial community, recycling and supplying essential nutrients and biomolecules , and regulating biogeochemical cycling . [ 11 ] [ 12 ] [ 13 ] [ 14 ] In deep, anoxic environments such as the Red Sea brine pools, viral infection of prokaryotes releases cellular DNA.
Extracellular DNA released through infection supplies highly labile biomolecules that support microbial communities in waters otherwise limited in external inputs. [ 13 ] Through lysogenic viral infection and horizontal gene transfer , the viral community in the Red Sea brine pools contributes to microbial DNA repair, nucleotide metabolism, [ 15 ] and the evolutionary adaptations of the microbial community. [ 6 ] [ 15 ]

The Red Sea brine pools were once thought to be inhospitable to life. [ 7 ] However, extremophiles have adapted to these environments through the development of novel enzymes and metabolic pathways. [ 16 ] [ 4 ] [ 17 ] The various brine pools contain somewhat similar diversities of microbes; however, due to the different characteristics of each brine pool, distinct microbial compositions are seen. Similarly to the Gulf of Mexico [ 18 ] brine pools, the Red Sea brine pools experience stratification within each distinct pool. [ 19 ] As a result of this stratification, physical and chemical properties vary with depth, producing a corresponding transition in the microbial community. [ 16 ] [ 7 ] Moreover, the stratification causes sharp brine-seawater interfaces, with typically steep gradients in salinity, temperature, density, oxygen, and pH. These distinct interfaces between layers of well-mixed water are characteristic of liquids that are stabilized by salt but destabilized by heating from below. Heat at the bottom of these stable salinity gradients causes double-diffusive convection events. [ 1 ] Deep-sea anoxic brines (referred to as DHABs, deep hypersaline anoxic basins) develop through the re-dissolution of evaporitic sediments buried at shallow depths, the tectonic ejection of interstitial brine that has reacted with the evaporites, or hydrothermal phase separation. [ 20 ] Various types of bacteria have been documented beneath the brine pools. [ 21 ]

Stratification within and around water layers is a characteristic of brine pools due to the highly saline environment. Specifically, in the Red Sea, as a result of this stratification in the deep-sea brine pools, microbial communities show differences in their vertical distribution and composition. [ 22 ] For example, through the use of metagenomics and pyrosequencing , the microbial communities of two deeps ( Atlantis II and Discovery) were investigated with respect to vertical distribution. In terms of archaeal communities, both deeps showed a similar composition, with the upper layer (20–50 m) enriched in Halobacteriales ; as salt concentration increased and oxygen decreased, Desulfurococcales tended to dominate due to physiological adaptations. [ 22 ] [ 23 ] The bacterial composition in the upper layer consisted of Cyanobacteria due to the presence of light. Deeper in the water column, Proteobacteria , specifically the gamma-subdivision group (orders Thiotrichales , Salinisphaerales , Chromatiales , and Alteromonadales ), were found to dominate the more extreme conditions. [ 22 ] The stratification within the Red Sea brine pools therefore allows for a complex composition of the microbial community with depth. Variability between the pools accounts for differences in taxa at each location and at each depth.

Extremozymes are prominent in Red Sea brine pools because they can catalyze reactions in harsh environments.
[ 24 ] In general, extremozymes can be separated into categories depending on habitat, such as those that can resist extremes of cold ( psychrophiles ), heat ( thermophiles and hyperthermophiles ), acidity ( acidophiles ), alkalinity ( alkaliphiles ), and salinity ( halophiles ). [ 25 ] The Red Sea brine pools host a polyextremophilic microbiological community, providing the environment with a source of extremozymes. Most of these extremozymes fall into three classes of enzymes: oxidoreductases , transferases , and hydrolases ; [ 21 ] these are important for the metabolic processes of the organisms within this habitat as well as for potential applications. [ 4 ] Several anoxic, high-salinity deep-sea basins in the Red Sea generate notably sharp interfaces that produce a variety of physicochemical gradients. [ 26 ] By acting as a particle trap for organic and inorganic elements from saltwater, brine pools can significantly increase the supply of nutrients and the potential for bacterial growth. [ 27 ] On the other hand, halophilic bacteria must evolve specific adaptations to survive the brine pool habitat. For example, halophilic enzymes have a higher proportion of acidic amino acid residues than non-halophilic homologues. These bacteria accumulate high concentrations of KCl in their cytoplasm, approaching saturation. [ 28 ] Recently, twelve enzymes with specific biochemical properties promising for potential applications have been detected in the Red Sea brine pools ( Atlantis II Deep , Discovery Deep, and Kebrit Deep). [ 4 ]

The microbes that inhabit the hot, hypersaline, anoxic, and toxic-metal-contaminated Red Sea brine pools produce or accumulate enzymes known as extremozymes, which allow life to survive there. [ 29 ] The chemical and physical properties of the extremozymes, in addition to their stability, provide potential uses in industrial, biotechnical, and pharmaceutical disciplines. [ 4 ] [ 30 ] [ 31 ] The different enzymes can be attributed to the different organisms that live within each brine pool, owing to the pools' variable conditions. The Kebrit Deep, one of the smallest Red Sea brine pools, is, at 21–23 °C, not considered a hot brine. [ 4 ] Other characteristics include a pH of 5.2, an 84-m-thick brine layer, and high levels of hydrogen sulfide . [ 8 ] [ 32 ] The Atlantis II Deep is among the largest Red Sea brine pools and has high temperatures (~68 °C), a pH of 5.3, and high metal content. [ 33 ] [ 34 ] While Discovery Deep is similar to Atlantis II Deep, it has differences in metal content and is less extreme overall. [ 35 ] [ 36 ]

The Thuwal cold seeps were accidentally discovered in the Red Sea at a depth of about 850 m on 7 May 2010 by a remotely-operated vehicle . [ 54 ] The scientists were conducting a continental slope survey of the Red Sea as part of the KAUST Red Sea Expedition 2010. [ 54 ] These cold seeps occur along the tectonically active continental margin within the Red Sea, where hypersaline brine seeps out of the seabed and associates with brine pool formations. [ 54 ] The Thuwal cold seeps are considered "cold" due to their cooler temperature (about 21.7 °C) relative to other brine pools found within the Red Sea. [ citation needed ] Cold seeps are a component of deep-sea ecosystems in which chemosynthetic bacteria, acting as the base of the community, use the methane and hydrogen sulfide in seep water as their energy source.
[ 55 ] The microbial community acts as the base of the food chain for an ecosystem of organisms, helping to sustain and feed bottom- and filter-feeders such as bivalves . [ citation needed ]

During a 2020 research expedition, with the use of bathymetry and geophysical observations, four complex brine pools were discovered in the northern Gulf of Aqaba , which had not previously been known to harbor brine pools. The discovery consisted of three small brine pools of less than 10 m² and another pool of 10,000 m², which were together given the name NEOM Brine Pools. [ 31 ] The NEOM Brine Pools are distinct from other Red Sea brine pools in that they are located much closer to the shore. Because the pools lie only 2 km offshore, they receive sediment shed from the land and as a result can preserve geophysical records that could give insight into historical tsunamis, flash floods, and earthquakes that may have occurred in the Gulf of Aqaba. [ 31 ] Within these NEOM brine pools, stratification of the overlying water, the interface, and the brine water caused stratification of microbial diversity. [ 31 ] The upper layer consisted of aerobic microbes such as Gammaproteobacteria , Thaumarchaeota , Alphaproteobacteria , and Nitrospira . In the deeper convective layers of the NEOM pools, sulfate-reducing and methanogenic microorganisms were more abundant, given the anaerobic conditions. [ 31 ]
https://en.wikipedia.org/wiki/Red_Sea_brine_pool_microbiology
Red cell genotyping , [ 1 ] [ 2 ] [ 3 ] also known as blood group genotyping , [ 4 ] [ 5 ] [ 6 ] [ 7 ] is a molecular technique used to identify genetic variants responsible for antigens on the surface of red blood cells. Unlike traditional serological testing , which relies on the presence of antibodies to detect antigens , genotyping analyzes DNA to determine an individual's blood group profile with high accuracy. This approach is particularly valuable in complex transfusion cases, such as in patients with multiple alloantibodies, hemoglobinopathies , or recent transfusions that can obscure serological results. Red cell genotyping enhances transfusion safety by enabling precise donor–recipient matching, reducing the risk of alloimmunization , and improving outcomes for patients requiring chronic transfusions, such as those with sickle cell disease and thalassemia . The molecular testing of red cell antigens is often handled in conjunction with platelet and neutrophil antigens by professional organizations, such as the International Society of Blood Transfusion (ISBT) [ 8 ] and the Association for the Advancement of Blood & Biotherapies ( AABB ). [ 9 ] Blood group genotyping refers to the analysis of blood group antigens presented on the red cell membrane , including those attached to proteins, the carbohydrate components of glycoproteins and glycolipids , or anchored via a glycosylphosphatidylinositol (GPI) linker. As of October 2024, a total of 366 red cell antigens had been officially recognized by the ISBT , organized into 47 distinct human blood group systems . The term red cell genotyping is preferred over blood group genotyping because it covers all antigens found on the red cell membrane , not just those officially recognized as blood group antigens.
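As a minimal sketch of how the DNA-to-antigen step might be organised in software, the variant identifiers and antigen assignments below are hypothetical placeholders, not real ISBT allele definitions or values from the cited sources:

```python
# Hypothetical sketch of red cell genotyping logic: predicting antigen
# phenotypes from DNA variants. Variant IDs and antigen assignments are
# placeholders for illustration, not real blood-group allele definitions.
ANTIGEN_RULES = {
    # variant id: (antigen predicted for the reference allele,
    #              antigen predicted for the alternate allele)
    "VARIANT_A": ("Xa+", "Xa-"),
    "VARIANT_B": ("Yb-", "Yb+"),
}

def predict_antigens(genotype: dict[str, str]) -> list[str]:
    """Map observed alleles ("ref" or "alt") to predicted antigen phenotypes."""
    predicted = []
    for variant, allele in genotype.items():
        ref_antigen, alt_antigen = ANTIGEN_RULES[variant]
        predicted.append(ref_antigen if allele == "ref" else alt_antigen)
    return predicted

print(predict_antigens({"VARIANT_A": "ref", "VARIANT_B": "alt"}))  # ['Xa+', 'Yb+']
```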
https://en.wikipedia.org/wiki/Red_cell_genotyping
Red edge refers to the region of rapid change in reflectance of vegetation in the near infrared range of the electromagnetic spectrum . Chlorophyll contained in vegetation absorbs most of the light in the visible part of the spectrum but becomes almost transparent at wavelengths greater than 700 nm . The cellular structure of the vegetation then causes this infrared light to be reflected because each cell acts something like an elementary corner reflector . [ citation needed ] The change can be from 5% to 50% reflectance going from 680 nm to 730 nm. This is an advantage to plants in avoiding overheating during photosynthesis . For a more detailed explanation and a graph of the photosynthetically active radiation (PAR) spectral region, see Normalized difference vegetation index § Rationale . The phenomenon accounts for the brightness of foliage in infrared photography and is extensively utilized in the form of so-called vegetation indices (e.g. Normalized difference vegetation index ). It is used in remote sensing to monitor plant activity, and it has been suggested that it could be useful to detect light-harvesting organisms on distant planets. [ 1 ] This astrobiology -related article is a stub . You can help Wikipedia by expanding it . This remote sensing -related article is a stub . You can help Wikipedia by expanding it . This photosynthesis article is a stub . You can help Wikipedia by expanding it .
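Since the article points to vegetation indices such as NDVI as the main practical use of the red edge, a minimal numerical sketch may help; the reflectance values below are illustrative made-up numbers, not measurements from any cited source:

```python
# Minimal sketch: the Normalized Difference Vegetation Index (NDVI) exploits
# the red-edge contrast between strong chlorophyll absorption in the red and
# strong reflectance in the near infrared: NDVI = (NIR - Red) / (NIR + Red).
def ndvi(red: float, nir: float) -> float:
    """NDVI for a single pixel, given red and near-infrared reflectance."""
    return (nir - red) / (nir + red)

# Illustrative (made-up) reflectances on either side of the red edge.
print(round(ndvi(red=0.05, nir=0.50), 2))  # dense vegetation -> 0.82
print(round(ndvi(red=0.30, nir=0.35), 2))  # bare soil        -> 0.08
```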
https://en.wikipedia.org/wiki/Red_edge
Lionel Thomas Caswall Rolt (usually abbreviated to Tom Rolt or L. T. C. Rolt ) (11 February 1910 – 9 May 1974 [ 1 ] [ 2 ] ) was a prolific English writer and the biographer of major civil engineering figures, including Isambard Kingdom Brunel and Thomas Telford . He is also regarded as one of the pioneers of the leisure cruising industry on Britain's inland waterways, and was an enthusiast for vintage cars and heritage railways . He played a pioneering role in both the canal and railway preservation movements. Tom Rolt was born in Chester to a line of Rolts "dedicated to hunting and procreation". His father Lionel had settled back in Britain in Hay-on-Wye after working on a cattle station in Australia and a plantation in India, and joining (unsuccessfully) in the Klondike Gold Rush of 1898. However, Lionel Rolt lost most of his money in 1920 after investing his capital in a company that failed, and the family moved to a pair of stone cottages in Stanley Pontlarge in Gloucestershire. [ 3 ] Rolt studied at Cheltenham College and at the age of 16 he took a job learning about steam traction, before starting an apprenticeship at the Kerr Stuart locomotive works in Stoke-on-Trent , where his uncle, Kyrle Willans , was chief development engineer. His uncle bought a wooden horse-drawn narrow flyboat called Cressy and fitted it with a steam engine. Then, having discovered that the steam made steering through tunnels impossible, he replaced the engine with a Ford Model T engine. This was Rolt's introduction to the canal system. After Kerr Stuart went into liquidation in 1930, Rolt became jobless and turned to vintage sports cars, taking part in the veteran run to Brighton , and acquiring a succession of cars including a 1924 Alvis 12/50 two seater "duck's back" that he kept for the rest of his life. [ 4 ] Rolt bought into a motor garage partnership next to the Phoenix public house in Hartley Wintney in Hampshire. Its breakdown vehicle was an adapted 1911 Rolls-Royce Silver Ghost . Together with the landlord of the Phoenix, Tim Carson, and others, Rolt formed the Vintage Sports-Car Club in 1934. He also founded the Prescott hill climb . His 1950 book Horseless Carriage contains a diatribe against the emergence of mass production in the English car industry, claiming that "mass production methods must develop towards the ultimate end [of automatic procreation of machines by machines], although by doing so, they involve either the supersession of men by machines or a continual expansion of production". [ 5 ] His preference for traditional craftsmanship helps to explain his subsequent career. In 1936 Kyrle Willans bought back Cressy , which he had earlier sold, and several trips on the waterways convinced Rolt that he wanted a life afloat. He persuaded Angela Orred to join him in this idyll. She was a young blonde in a white polonecked sweater who had swept into his garage in an Alfa Romeo in 1937 and been caught up in the vintage car scene. Rolt bought Cressy from his uncle and set about converting her into a boat that could be lived on, the most notable addition being a bath. During the summer of 1939 Rolt and Angela decided to defy her father's objections and married in secret on 11 July. Work on Cressy was completed at Tooley's Boatyard in Banbury , and on 27 July Rolt and his wife set off up the Oxford Canal . 
The outbreak of the Second World War intervened and Rolt, a pacifist at heart, immediately signed up at the Rolls-Royce factory at Crewe to work on the production line of the Spitfire 's Merlin engine. He was saved from the tedium of the production line by the offer of a job in a bell foundry at Aldbourne in Wiltshire. The Rolts headed south in Cressy through storms, reaching Banbury a day before the canals were frozen over for the winter. In March 1940 the Rolts negotiated the River Thames in flood and headed up the River Kennet to reach Hungerford , near Aldbourne. Rolt then worked there for more than a year. The Rolts' first four-month cruise was described in a book that Rolt initially called Painted Ship . He sent the manuscript to several publishers, but it did not find acceptance, as it was felt that there was no market for books about canals. It was not until after a magazine article he wrote came to the attention of the countryside writer H. J. Massingham that Rolt's book was published, in December 1944, under the title Narrow Boat . Narrow Boat had an immediate success with critics and public, leading to fan mail arriving at the Rolts' boat, which was then moored at Tardebigge . Two of the letters were from Robert Aickman and Charles Hadfield , who were both to figure prominently in the next phase of Rolt's life, when he became a campaigner. He invited Aickman and his wife Ray to join the Rolts on Cressy . Aickman later described the journey with the Rolts as "the best time I have ever spent on the waterways". It was on this journey that they decided to form an organisation that a few weeks later, in May 1946, at Aickman's flat in London, was named the Inland Waterways Association , with Aickman as chairman, Hadfield as vice-chairman and Rolt as secretary. The inland waterways of Britain were nationalised in 1947 and faced an uncertain future. The traditional life that Rolt had described seemed to be threatened with extinction. Rolt pioneered direct action on the Stratford-upon-Avon Canal , which stopped British Waterways from closing it; organised a hugely successful Inland Waterways Exhibition, which started in London but toured the country; and proposed the first boat rally at Market Harborough . Aickman, who had a private income, was working full time on the campaign, while Rolt, who had only his writing to support him and was still living aboard Cressy , struggled to meet all his commitments. Eventually he fell out with Aickman over the latter's insistence that every mile of canal should be saved. In early 1951 Rolt was expelled from the organisation he had inspired. By this time he had decided to bring his life on Cressy to an end and return to his family home in Stanley Pontlarge. Angela departed to continue the mobile life, joining Billy Smart 's Circus. A letter Rolt had sent to the Birmingham Post in 1950 resulted in the formation of the Talyllyn Railway Preservation Society , and he now threw himself into its activities, becoming chairman of the company that operated the railway as a tourist attraction. "By the time the fateful letter terminating his IWA membership arrived, he was already busy issuing and stamping passengers' tickets from the little station in Towyn ". [ 6 ] His time at Talyllyn gave rise to his book Railway Adventure (1953), which became the basis for the Ealing comedy film The Titfield Thunderbolt . Rolt married again, to Sonia Smith ( née South), a former actress. 
During the war she had become one of the amateur boatwomen who worked the canals and had married a boatman. She had been on the council of the IWA. They had two sons, Tim and Dick, and continued to live in Stanley Pontlarge until Rolt's death in 1974. Rolt became a full-time writer in 1939. [ 7 ] The 1950s were Rolt's most prolific time as an author. His best-known works were biographies of Isambard Kingdom Brunel , which stimulated a revival of interest in a forgotten hero, [ 8 ] [ 9 ] George and Robert Stephenson , and Thomas Telford . His classic study of historic railway accidents, Red for Danger , became a textbook for numerous engineering courses. Rolt produced many works about subjects that had not previously been considered the stuff of literature, such as civil engineering , canals and railways. In his later years he produced three volumes of autobiography, only one of which was published during his lifetime. Rolt also published Sleep No More (1948) a collection of supernatural horror stories featuring ghosts , possession and atavism . [ 10 ] These were modelled after the work of M. R. James , but used industrial settings such as railways instead of James' "antiquarian" settings. [ 10 ] [ 11 ] The Penguin Encyclopedia of Horror and the Supernatural described Sleep No More as "An exceptionally original collection of ghost stories ... Rolt had the special talent of combining folkloric spontaneity with artful sophistication." [ 12 ] Several of Rolt's stories were anthologised; they were also adapted as radio dramas. [ 10 ] His "Winterstoke" (1954) is a unique perspective on the development of modern Britain from the Feudal system via the dissolution of the monasteries . Rolt was vice-president of the Newcomen Society , which established a Rolt Prize; [ 13 ] a trustee and member of the Advisory Council of the Science Museum ; a member of the York Railway Museum Committee; an honorary MA of Newcastle ; an honorary MSc of the University of Bath (1973) [ 14 ] and a Fellow of the Royal Society of Literature . He was a joint founder of the Association for Industrial Archaeology , which has an annual Rolt lecture. He helped to form the Ironbridge Gorge Museum Trust . A locomotive Tom Rolt on the Talyllyn Railway , the world's first preserved railway, was named in his memory in 1991. Rolt observed the changes in society resulting from the industrial-scientific revolution. In the epilogue to his biography of Brunel he wrote, two years before C. P. Snow made similar statements about the split between the arts and sciences: Men spoke in one breath of the arts and sciences, and to the man of intelligence and culture it seemed essential that he should keep himself abreast of developments in both spheres. ... So long as the artist or the man of culture had been able to advance shoulder to shoulder with engineer and scientist, and with them see the picture whole, he could share their sense of mastery and confidence, and believe wholeheartedly in material progress. But so soon as science and the arts became divorced, so soon as they ceased to speak a common language, confidence vanished, and doubts and fears came crowding in. He set out these ideas more fully in his book High Horse Riderless , now seen by some as a classic of green philosophy. A bridge (no. 164) on the Oxford Canal in Banbury bears his name (in commemoration of his book Narrow Boat ), as does a centre at the boat museum at Ellesmere Port in Cheshire. 
A blue plaque to Rolt was unveiled at Tooley's Boatyard , Banbury on 7 August 2010 as part of the centenary celebrations of his birth. [ 15 ] Rolt's published works are arranged by topic in rough chronological order. [ 16 ] From 1958 onwards, Rolt was commissioned by many engineering companies to document their history. Many of these are unpublished internal documents; only the published works are listed here.
https://en.wikipedia.org/wiki/Red_for_Danger
A red giant is a luminous giant star of low or intermediate mass (roughly 0.3–8 solar masses ( M ☉ )) in a late phase of stellar evolution . The outer atmosphere is inflated and tenuous, making the radius large and the surface temperature around 5,000 K (4,700 °C; 8,500 °F) or lower. The appearance of a red giant ranges from yellow-white to reddish-orange, including the spectral types K and M, sometimes G, but also class S stars and most carbon stars . Red giants vary in the way in which they generate energy. Many of the well-known bright stars are red giants because they are luminous and moderately common. The K0 RGB star Arcturus is 36 light-years away, and Gacrux is the nearest M-class giant at 88 light-years' distance. A red giant will usually produce a planetary nebula and become a white dwarf at the end of its life.

A red giant is a star that has exhausted the supply of hydrogen in its core and has begun thermonuclear fusion of hydrogen in a shell surrounding the core. Such stars have radii tens to hundreds of times larger than that of the Sun . However, their outer envelope is lower in temperature, giving them a yellowish-orange hue. Despite the lower energy density of their envelope, red giants are many times more luminous than the Sun because of their great size. Red-giant-branch stars have luminosities up to nearly three thousand times that of the Sun ( L ☉ ); those of spectral type K or M have surface temperatures of 3,000–4,000 K (compared with the Sun's photosphere temperature of nearly 6,000 K ) and radii up to about 200 times that of the Sun ( R ☉ ). Stars on the horizontal branch are hotter, with only a small range of luminosities around 75 L ☉ . Asymptotic-giant-branch stars range from luminosities similar to those of the brighter stars of the red-giant branch up to several times more luminous at the end of the thermal pulsing phase. The asymptotic-giant-branch stars include the carbon stars of type C-N and late C-R, produced when carbon and other elements are convected to the surface in what is called a dredge-up . [ 1 ] The first dredge-up occurs during hydrogen shell burning on the red-giant branch, but does not produce a large carbon abundance at the surface. The second, and sometimes third, dredge-up occurs during helium shell burning on the asymptotic-giant branch and convects carbon to the surface in sufficiently massive stars.

The stellar limb of a red giant is not sharply defined, contrary to their depiction in many illustrations. Rather, due to the very low mass density of the envelope, such stars lack a well-defined photosphere , and the body of the star gradually transitions into a ' corona '. [ 2 ] The coolest red giants have complex spectra, with molecular lines , emission features, and sometimes masers , particularly from thermally pulsing AGB stars. [ 3 ] Observations have also provided evidence of a hot chromosphere above the photosphere of red giants, [ 4 ] [ 5 ] [ 6 ] though investigating the heating mechanisms that form these chromospheres requires 3D simulations of red giants. [ 7 ] Another noteworthy feature of red giants is that, unlike Sun-like stars whose photospheres have a large number of small convection cells ( solar granules ), red-giant photospheres, as well as those of red supergiants , have just a few large cells, the features of which cause the variations of brightness so common on both types of stars. [ 8 ]

Red giants evolve from main-sequence stars with masses in the range from about 0.3 M ☉ to around 8 M ☉ .
[ 9 ] When a star initially forms from a collapsing molecular cloud in the interstellar medium , it contains primarily hydrogen and helium, with trace amounts of " metals " (in astrophysics, this refers to all elements other than hydrogen and helium). These elements are all uniformly mixed throughout the star. The star "enters" the main sequence when its core reaches a temperature (several million kelvins ) high enough to begin fusing hydrogen-1 (the predominant isotope), and establishes hydrostatic equilibrium . (In astrophysics, stellar fusion is often referred to as "burning", with hydrogen fusion sometimes termed " hydrogen burning ".) Over its main sequence life, the star slowly fuses the hydrogen in the core into helium; its main-sequence life ends when nearly all the hydrogen in the core has been fused. For the Sun, the main-sequence lifetime is approximately 10 billion years. More massive stars burn their fuel disproportionately faster and so have a shorter lifetime than less massive stars. [ 10 ]

When the star has mostly exhausted the hydrogen fuel in its core, the core's rate of nuclear reactions declines, and thus so do the radiation and thermal pressure the core generates, which support the star against gravitational contraction . The star further contracts, increasing the pressures and thus temperatures inside the star (as described by the ideal gas law ). Eventually a "shell" layer around the core reaches temperatures sufficient to fuse hydrogen and thus generate its own radiation and thermal pressure, which "re-inflates" the star's outer layers and causes them to expand. [ 11 ] The hydrogen-burning shell results in a situation that has been described as the mirror principle : when the core within the shell contracts, the layers of the star outside the shell must expand. The detailed physical processes that cause this are complex. Still, the behavior is necessary to satisfy simultaneous conservation of gravitational and thermal energy in a star with the shell structure. The core contracts and heats up due to the lack of fusion, and so the outer layers of the star expand greatly, absorbing most of the extra energy from shell fusion. This process of cooling and expanding is the subgiant stage. When the envelope of the star cools sufficiently it becomes convective , the star stops expanding, its luminosity starts to increase, and the star is ascending the red-giant branch of the Hertzsprung–Russell (H–R) diagram . [ 10 ] [ 12 ]

The evolutionary path the star takes as it moves along the red-giant branch depends on the mass of the star. For the Sun and stars of less than about 2 M ☉ [ 13 ] the core will become dense enough that electron degeneracy pressure will prevent it from collapsing further. Once the core is degenerate , it will continue to heat until it reaches a temperature of roughly 1×10⁸ K , hot enough to begin fusing helium to carbon via the triple-alpha process . Once the degenerate core reaches this temperature, the entire core will begin helium fusion nearly simultaneously in a so-called helium flash . In more-massive stars, the collapsing core will reach these temperatures before it is dense enough to be degenerate, so helium fusion will begin much more smoothly, and produce no helium flash. [ 10 ] The core helium fusing phase of a star's life is called the horizontal branch in metal-poor stars , so named because these stars lie on a nearly horizontal line in the H–R diagram of many star clusters.
Metal-rich helium-fusing stars instead lie on the so-called red clump in the H–R diagram. [ 14 ] An analogous process occurs when the core helium is exhausted and the star collapses once again, causing helium in a shell to begin fusing. At the same time, hydrogen may begin fusion in a shell just outside the burning helium shell. This puts the star onto the asymptotic giant branch , a second red-giant phase. [ 15 ] The helium fusion results in the build-up of a carbon–oxygen core. A star below about 8 M ☉ will never start fusion in its degenerate carbon–oxygen core. [ 13 ] Instead, at the end of the asymptotic-giant-branch phase the star will eject its outer layers, forming a planetary nebula with the core of the star exposed, ultimately becoming a white dwarf . The ejection of the outer mass and the creation of a planetary nebula finally ends the red-giant phase of the star's evolution. [ 10 ] The red-giant phase typically lasts only around a billion years in total for a solar mass star, almost all of which is spent on the red-giant branch. The horizontal-branch and asymptotic-giant-branch phases proceed tens of times faster.

If the star has about 0.2 to 0.5 M ☉ , [ 13 ] corresponding to red dwarfs earlier than about M5V , it is massive enough to become a red giant but does not have enough mass to initiate the fusion of helium. [ 9 ] These "intermediate" stars cool somewhat and increase their luminosity but never reach the tip of the red-giant branch and the helium core flash. When the ascent of the red-giant branch ends, such a star puffs off its outer layers much like a post-asymptotic-giant-branch star and then becomes a white dwarf. Very-low-mass stars are fully convective [ 16 ] [ 17 ] and may continue to fuse hydrogen into helium for up to a trillion years [ 18 ] until only a small fraction of the entire star is hydrogen. Luminosity and temperature steadily increase during this time, just as for more-massive main-sequence stars, but the length of time involved means that the temperature eventually increases by about 50% and the luminosity by around 10 times. Eventually the level of helium increases to the point where the star ceases to be fully convective and the remaining hydrogen locked in the core is consumed in only a few billion more years. Depending on mass, the temperature and luminosity continue to increase for a time during hydrogen shell burning; the star can become hotter than the Sun and tens of times more luminous than when it formed, although still not as luminous as the Sun. After some billions more years, these stars start to become less luminous and cooler even though hydrogen shell burning continues, and they become cool helium white dwarfs. [ 9 ]

Very-high-mass stars develop into supergiants that follow an evolutionary track that takes them back and forth horizontally over the H–R diagram, at the right end constituting red supergiants . These usually end their life as a type II supernova . The most massive stars can become Wolf–Rayet stars without becoming giants or supergiants at all. [ 19 ] [ 20 ]

Although it has traditionally been suggested that the evolution of a star into a red giant will render its planetary system , if present, uninhabitable, some research suggests that, during the evolution of a 1 M ☉ star along the red-giant branch, it could harbor a habitable zone for several billion years at 2 astronomical units (AU), down to around 100 million years at 9 AU, giving perhaps enough time for life to develop on a suitable world.
After the red-giant stage, such a star would have a habitable zone between 7 and 22 AU for an additional one billion years. [ 21 ] Later studies have refined this scenario, showing how, for a 1 M ☉ star, the habitable zone lasts from 100 million years for a planet with an orbit similar to that of Mars to 210 million years for one that orbits at Saturn 's distance from the Sun, with the maximum time (370 million years) corresponding to planets orbiting at the distance of Jupiter . However, planets orbiting a 0.5 M ☉ star in orbits equivalent to those of Jupiter and Saturn would be in the habitable zone for 5.8 billion years and 2.1 billion years, respectively; for stars more massive than the Sun, the times are considerably shorter. [ 22 ]

As of 2023, several hundred giant planets have been discovered around giant stars. [ 23 ] However, these giant planets are more massive than the giant planets found around solar-type stars. This could be because giant stars are more massive than the Sun (less massive stars will still be on the main sequence and will not have become giants yet) and more massive stars are expected to have more massive planets. However, the masses of the planets that have been found around giant stars do not correlate with the masses of the stars; therefore, the planets could be growing in mass during the stars' red giant phase. The growth in planet mass could be partly due to accretion from stellar wind, although a much larger effect would be Roche lobe overflow causing mass transfer from the star to the planet when the giant expands out to the orbital distance of the planet. [ 24 ] (A similar process in multiple star systems is believed to be the cause of most novas and type Ia supernovas .)

Many of the well-known bright stars are red giants, because they are luminous and moderately common. The red-giant branch variable star Gamma Crucis is the nearest M-class giant star at 88 light-years. [ 25 ] The K1.5 red-giant branch star Arcturus is 36 light-years away. [ 26 ] The Sun will exit the main sequence in approximately 5 billion years and start to turn into a red giant. [ 29 ] [ 30 ] As a red giant, the Sun will grow so large (over 200 times its present-day radius : ~215 R ☉ , ~1 AU ) that it will engulf Mercury , Venus , and likely Earth. It will lose 38% of its mass as it grows, and will then end its life as a white dwarf . [ 31 ]

Media related to Red giants at Wikimedia Commons
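As a rough cross-check of the figures quoted earlier in this article, the following back-of-envelope relations may help. They are standard textbook formulas (the Stefan–Boltzmann law and a simple square-root scaling of habitable-zone distance with luminosity), not calculations taken from the cited sources.

```latex
% Stefan-Boltzmann law: luminosity from radius and effective temperature.
% With R ~ 200 R_sun and T ~ 3,000 K (figures quoted above for red-giant-branch stars):
\[
  \frac{L}{L_\odot} \;=\; \left(\frac{R}{R_\odot}\right)^{2}\left(\frac{T}{T_\odot}\right)^{4}
  \;\approx\; 200^{2}\left(\frac{3000\,\mathrm{K}}{5800\,\mathrm{K}}\right)^{4}
  \;\approx\; 2.9\times10^{3}
\]
% Habitable-zone distance scales roughly with the square root of luminosity,
% so a star of ~80 L_sun keeps its habitable zone near ~9 AU, as quoted above:
\[
  d_{\mathrm{HZ}} \;\approx\; \sqrt{L/L_\odot}\ \mathrm{AU}
  \;\approx\; \sqrt{80}\ \mathrm{AU} \;\approx\; 9\ \mathrm{AU}
\]
```

The first relation reproduces the "nearly three thousand L ☉" figure for the brightest red-giant-branch stars; the second illustrates why the habitable zone migrates outward as the star brightens along the red-giant branch.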
https://en.wikipedia.org/wiki/Red_giant
In blacksmithing , red heat is the practice of using colours to determine the temperature of a piece of metal (usually iron or steel ). Long before thermometers were widely available, it was necessary to know what state the metal was in for heat treating , and the only way to do this was to heat it up to a colour which was known to be best for the work. According to Chapman's Workshop Technology , a standard sequence of colours can be observed in steel as it is heated. [ 1 ] In 1905, Stirling Consolidated Boiler Company published a slightly different set of values. [ 2 ] This metalworking article is a stub . You can help Wikipedia by expanding it .
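As background for why colour tracks temperature (a standard black-body argument, not taken from the tables cited above), Wien's displacement law locates the peak of thermal emission; a visible glow appears once the short-wavelength tail of that emission enters the visible band, starting with a faint red at roughly the Draper point (about 800 K, or roughly 525 °C):

```latex
% Wien's displacement law: wavelength of peak thermal emission at temperature T.
% Even at a bright orange heat (~1,100 K) the peak is well into the infrared;
% the visible colour comes from the short-wavelength tail and shifts from red
% toward yellow and white as the temperature rises.
\[
  \lambda_{\max} \;=\; \frac{b}{T}
  \;=\; \frac{2.898\times10^{-3}\ \mathrm{m\,K}}{1100\ \mathrm{K}}
  \;\approx\; 2.6\ \mu\mathrm{m}
\]
```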
https://en.wikipedia.org/wiki/Red_heat
Red mercury is a discredited substance , most likely a hoax perpetrated by con artists who sought to take advantage of gullible buyers on the black market for arms . [ 1 ] These con artists described it as a substance used in the creation of nuclear weapons ; because of the secrecy surrounding nuclear weapons development, it is difficult to disprove their claims completely. However, all samples of alleged "red mercury" analyzed in the public literature have proven to be well-known, common substances of no interest to weapons makers. [ 2 ] [ 3 ] References to red mercury first appeared in major Soviet and western media sources in the late 1980s. The articles were never specific as to what exactly red mercury was, but nevertheless claimed it was of great importance in nuclear bombs, or that it was used in the building of boosted fission weapons . Almost as soon as the stories appeared, people started attempting to buy it. At that point, the purported nature of the substance started to change, and eventually turned into anything the buyer happened to be interested in. As New Scientist reported in 1992, a Lawrence Livermore National Laboratory report outlined that: When red mercury first appeared on the international black market 15 years ago, the supposedly top secret nuclear material was 'red' because it came from Russia. When it resurfaced last year in the formerly communist states of Eastern Europe it had unaccountably acquired a red colour. But then, as a report from the US Department of Energy reveals, mysterious transformations are red mercury's stock in trade. The report, compiled by researchers at the Los Alamos National Laboratory, shows that in the hands of hoaxers and conmen, red mercury can do almost anything the aspiring Third World demagogue wants it to. You want a short cut to making an atom bomb? You want the key to Soviet ballistic missile guidance systems? Or perhaps you want the Russian alternative to the anti-radar paint on the stealth bomber? What you need is red mercury. [ 4 ] A 1993 article in the Russian newspaper Pravda , claiming to be informed by leaked top-secret memos, described red mercury as: [A] super-conductive material used for producing high-precision conventional and nuclear bomb explosives, 'Stealth' surfaces and self-guided warheads. Primary end-users are major aerospace and nuclear-industry companies in the United States and France along with nations aspiring to join the nuclear club, such as South Africa, Israel, Iran, Iraq, and Libya. [ 5 ] Two TV documentaries about red mercury were made by British Channel 4 television , airing in 1993 and 1994; Trail of Red Mercury and Pocket Neutron , which claimed to have "startling evidence that Russian scientists have designed a miniature neutron bomb using a mysterious compound called red mercury". [ 6 ] Samuel T. Cohen , an American physicist who worked on building the atomic bomb, said in his autobiography that red mercury is manufactured by "mixing special nuclear materials in very small amounts into the ordinary compound and then inserting the mixture into a nuclear reactor or bombarding it with a particle-accelerator beam." When detonated, this mixture allegedly becomes "extremely hot, which allows pressures and temperatures to be built up that are capable of igniting the heavy hydrogen and producing a pure-fusion mini neutron bomb." 
[ 6 ] Red mercury was offered for sale throughout Europe and the Middle East by Russian businessmen, who found many buyers who would pay almost anything for the substance, even though they had no idea what it was. A study for the Bulletin of the Atomic Scientists published in 1997 has perhaps the most factual summary of red mercury: The asking price for red mercury ranged from $100,000 to $300,000 per kilogram. Sometimes the material would be irradiated or shipped in containers with radioactive symbols, perhaps to convince potential buyers of its strategic value. But samples seized by police contained only mercury(II) oxide , mercury(II) iodide , or mercury mixed with red dye – hardly materials of interest to weapons-makers. [ 2 ] Following the arrest of several men in Britain in September 2004, on suspicion that they were trying to buy a kilogram of red mercury for £900,000, the International Atomic Energy Agency made a statement dismissing claims that the substance is real. "Red mercury doesn't exist," said the spokesman. "The whole thing is a bunch of malarkey ." [ 7 ] When the case came to trial at the Old Bailey in April 2006, it became apparent that News of the World ' s "fake sheikh" Mazher Mahmood had worked with the police to catch the three men (Dominic Martins, Roque Fernandes and Abdurahman Kanyare). They were tried for "trying to set up funding or property for terrorism" and "having an article (a highly dangerous mercury-based substance) for terrorism". According to the prosecutor, red mercury was believed to be a material which could cause a large explosion, possibly even a nuclear reaction, and whether or not red mercury actually existed was irrelevant to the prosecution. [ 8 ] All three men were acquitted in July 2006. [ 9 ] [ 10 ] Several common mercury compounds are indeed red, such as mercury sulfide (from which the bright-red pigment vermilion was originally derived), mercury(II) oxide (historically called red precipitate ), and mercury(II) iodide , and others are explosive, such as mercury(II) fulminate . No use for any of these compounds in nuclear weapons has been publicly documented. "Red mercury" could also be a code name for a substance that contains no mercury at all. A variety of different items have been chemically analyzed as putative samples of "red mercury" since the substance first came to the attention of the media, but no single substance was found in these items. A sample of radioactive material was seized by German police in May 1994. This consisted of a complex mixture of elements, including about 10% by weight plutonium , with the remainder consisting of 61% mercury , 11% antimony , 6% oxygen , 2% iodine and 1.6% gallium . [ 11 ] The reason why somebody had assembled this complex mixture of chemicals is unknown; equally puzzling was the presence of fragments of glass and brush bristles, suggesting that someone had dropped a bottle of this substance and then swept it up into a new container. [ 12 ] In contrast, an analysis reported in 1998 of a different "red mercury" sample concluded that this sample was a non-radioactive mixture of elemental mercury, water and mercury(II) iodide, which is a red colored chemical. [ 1 ] Similarly, another analysis of a sample recovered in Zagreb in November 2003 reported that this item contained only mercury. [ 13 ] One formula that had been claimed previously for red mercury was Hg 2 Sb 2 O 7 ( mercury(II) pyroantimonate ), but no antimony was detected in this 2003 sample. 
[ 13 ] [ 14 ] Red mercury was described by many commentators, [ who? ] and the exact nature of its supposed working mechanism varied widely among them. In general, however, none of these explanations appear to be scientifically or historically supportable. Traditional staged thermonuclear weapons consist of two parts, a fission "primary" and a fusion/fission "secondary". The energy released by the primary when it explodes is used to (indirectly) compress the secondary and start a fusion reaction within it. Conventional explosives are far too weak to provide the level of compression needed. The primary is generally built as small as possible, because the energy released by the secondary is much larger, and thus building a larger primary is generally inefficient. There is a lower limit on the size of the primary, known as the critical mass . For weapons grade plutonium , this is around 10 kg (22 lb). This can be reduced through the use of neutron reflectors or clever arrangements of explosives to compress the core, but these methods generally add to the size and complexity of the resulting device. Because of the need for a fission primary and the difficulty of purifying weapons-grade fissile materials, the majority of arms control efforts to limit nuclear proliferation rely on the detection and control of the fissile material and the equipment needed to obtain it. A theory popular in the mid-1990s was that red mercury facilitated the enrichment of uranium to weapons-grade purity. Conventionally, such enrichment is usually done with Zippe-type centrifuges , and takes several years. Red mercury was speculated [ who? ] to eliminate this costly and time-consuming step. Although this would not eliminate the possibility of detecting the material, it could escape detection during enrichment as the facilities hosting centrifuges normally used in this process are very large and require equipment that can be fairly easily tracked internationally. Eliminating such equipment would in theory greatly ease the construction of a clandestine nuclear weapon. A key part of the secondary in a fusion bomb is lithium-6 -deuteride. When irradiated with high-energy neutrons , Li-6 creates tritium , which mixes with the deuterium in the same mixture and fuses at a relatively low temperature. Russian weapon designers have reported (1993) that red mercury was the Soviet codename for lithium-6, which has an affinity for mercury and tends to acquire a red colour due to mercuric impurities during its separation process. [ 15 ] [ 16 ] Samuel T. Cohen , the "father of the neutron bomb ", claimed for a long time that red mercury is a powerful explosive-like chemical known as a ballotechnic . The energy released during its reaction is allegedly enough to directly compress the secondary without the need for a fission primary in a thermonuclear weapon . He claimed that he learned that the Soviet scientists perfected the use of red mercury and used it to produce a number of softball -sized pure fusion bombs weighing as little as 10 lb (4.5 kg), which he claimed were made in large numbers. [ 17 ] He went on to claim that the reason this is not more widely known is that elements within the US power structure are deliberately suppressing or hiding information due to the frightening implications such a weapon would have on nuclear proliferation. Since a red mercury bomb would require no fissile material, it would seemingly be impossible to protect against its widespread proliferation given current arms control methodologies. 
Instead of trying to do so, they simply claim it does not exist, while acknowledging its existence privately. Cohen also claimed that when President Boris Yeltsin took power, he secretly authorized the sale of red mercury on the international market, and that fake versions of it were sometimes offered to gullible buyers. [ 17 ] Critics argue Cohen's claims are difficult to support scientifically. The amount of energy released by the fission primary is thousands of times greater than that released by conventional explosives, and it appears [ who? ] that the "red mercury" approach would be orders of magnitude smaller than required. Additionally, it appears there is no independent confirmation of any sort of Cohen's claims to the reality of red mercury. The scientists [ who? ] in charge of the labs where the material would have been made have publicly dismissed the claims (see below), as have numerous US colleagues, including Edward Teller . According to Cohen, [ 17 ] veteran nuclear weapon designer Frank Barnaby conducted secret interviews with Russian scientists who told him that red mercury was produced by dissolving mercury antimony oxide in mercury, heating and irradiating the resultant amalgam , and then removing the elemental mercury through evaporation. [ 18 ] The irradiation was reportedly carried out by placing the substance inside a nuclear reactor. [ 7 ] As mentioned earlier, one of the origins of the term "red mercury" was in the Russian newspaper Pravda , which claimed that red mercury was "a super-conductive material used for producing high-precision conventional and nuclear bomb explosives, 'stealth' surfaces and self-guided warheads." [ 5 ] Any substance with these sorts of highly differing properties would be suspect to most, but the stealth story continued to have some traction long after most had dismissed the entire story. Red mercury is thought by some to be the invention of an intelligence agency or criminal gang for the purpose of deceiving terrorists and rogue states who were trying to acquire nuclear technology on the black market. [ 19 ] One televised report indicated that the Soviet Union encouraged the KGB and GRU to arrange sting operations for the detection of those seeking to deal in nuclear materials. The Soviet intelligence services allegedly created a myth of the necessity of "red mercury" for the sorts of nuclear devices that terrorists and rogue governments might seek. [ 20 ] Political entities that already had nuclear weapons did nothing to debunk the myth. In 1999 Jane's Intelligence Review suggested that victims of red mercury scams may have included Osama bin Laden . [ 6 ] Organizations involved in landmine clearance and unexploded munitions disposal noted a belief amongst some communities in southern Africa that red mercury may be found in certain types of ordnance. Attempting to extract red mercury, purported to be highly valuable, was reported as a motivation for people dismantling items of unexploded ordnance, and suffering death or injury as a result. In some cases it was reported that unscrupulous traders may be deliberately promoting this misconception in an effort to build a market for recovered ordnance. [ 21 ] An explosion in Chitungwiza , Zimbabwe, that killed five people is attributed to a quest to reclaim red mercury from a live landmine. 
[ 22 ] In April 2009 it was reported from Saudi Arabia that rumors that Singer sewing machines contained "red mercury" had caused the prices of such machines to massively increase in the Kingdom, with some paying up to SR 200,000 for a single machine which could previously have been bought for SR200. [ 23 ] Believers in the rumor claimed that the presence of red mercury in the sewing machines' needles could be detected using a mobile telephone; if the line cut off when the telephone was placed near to the needle, this supposedly proved that the substance was present. In Medina there was a busy trade in the sewing machines, with buyers seen using mobile phones to check the machines for red mercury content, while it was reported that others had resorted to theft, with two tailors' shops in Dhulum broken into and their sewing machines stolen. At other locales, there were rumors that a Kuwait-based multinational had been buying up the Singer machines, while in Al-Jouf, the residents were led to believe that a local museum was buying up any such machines that it could find, and numerous women appeared at the museum offering to sell their Singer machines. [ 24 ] There was little agreement among believers in the story as to the exact nature or even color of the red mercury, while the supposed uses for it ranged from it being an essential component of nuclear power, to having the ability to summon jinn , extract gold, or locate buried treasure and perform other forms of magic. These beliefs in the supernatural properties of red mercury are rooted in medieval Islamic conceptions of the alchemical properties of mercury. The official spokesman for the Riyadh police said that the rumors had been started by gangs attempting to swindle people out of their money, and denied the existence of red mercury in sewing machines. [ 24 ]
https://en.wikipedia.org/wiki/Red_mercury
Red mud , now more frequently termed bauxite residue , is an industrial waste generated during the processing of bauxite into alumina using the Bayer process . It is composed of various oxide compounds, including the iron oxides which give its red colour. Over 97% of the alumina produced globally is through the Bayer process; for every tonne (2,200 lb) of alumina produced, approximately 1 to 1.5 tonnes (2,200 to 3,300 lb) of red mud are also produced; the global average is 1.23. Annual production of alumina in 2023 was over 142 million tonnes (310 billion pounds) resulting in the generation of approximately 170 million tonnes (370 billion pounds) of red mud. [ 1 ] Due to this high level of production and the material's high alkalinity , if not stored properly, it can pose a significant environmental hazard. As a result, significant effort is being invested in finding better methods for safe storage and dealing with it such as waste valorization in order to create useful materials for cement and concrete . [ 2 ] Less commonly, this material is also known as bauxite tailings , red sludge , or alumina refinery residues . Increasingly, the name processed bauxite is being adopted, especially when used in cement applications. Red mud is a side-product of the Bayer process, the principal means of refining bauxite en route to alumina. The resulting alumina is the raw material for producing aluminium by the Hall–Héroult process . [ 3 ] A typical bauxite plant produces one to two times as much red mud as alumina. This ratio is dependent on the type of bauxite used in the refining process and the extraction conditions. [ 4 ] More than 60 manufacturing operations across the world use the Bayer process to make alumina from bauxite ore. [ citation needed ] Bauxite ore is mined, normally in open cast mines , and transferred to an alumina refinery for processing. The alumina is extracted using sodium hydroxide under conditions of high temperature and pressure. The insoluble part of the bauxite (the residue) is removed, giving rise to a solution of sodium aluminate , which is then seeded with an aluminium hydroxide crystal and allowed to cool which causes the remaining aluminium hydroxide to precipitate from the solution. Some of the aluminium hydroxide is used to seed the next batch, while the remainder is calcined (heated) at over 1,000 °C (1,830 °F) in rotary kilns or fluid flash calciners to produce aluminium oxide (alumina). The alumina content of the bauxite used is normally between 42 and 50%, but ores with a wide range of alumina contents can be used. The aluminium compound may be present as gibbsite (Al(OH) 3 ), boehmite (γ-AlO(OH)) or diaspore (α-AlO(OH)). The residue invariably has a high concentration of iron oxide which gives the product a characteristic red colour. A small residual amount of the sodium hydroxide used in the process remains with the residue, causing the material to have a high pH/alkalinity, normally above 12. Various stages of solid/liquid separation processes recycle as much sodium hydroxide as possible from the residue back into the Bayer Process, in order to reduce production costs and make the process as efficient as possible. This also lowers the final alkalinity of the residue, making it easier and safer to handle and store. Red mud is composed of a mixture of solid and metallic oxides. The red colour arises from iron oxides , which can comprise up to 60% of the mass. The mud is highly basic with a pH ranging from 10 to 13. 
[ 3 ] [ 4 ] [ 5 ] In addition to iron, the other dominant components include silica , unleached residual aluminium compounds, and titanium oxide . [ 6 ] The main constituents of the residue after the extraction of the aluminium component are insoluble metallic oxides. The percentage of these oxides produced by a particular alumina refinery will depend on the quality and nature of the bauxite ore and the extraction conditions, so the composition ranges for the common chemical constituents, whether expressed chemically or mineralogically, vary widely. In general, the composition of the residue reflects that of the non-aluminium components, with the exception of part of the silicon component: crystalline silica (quartz) will not react, but some of the silica present, often termed reactive silica, will react under the extraction conditions and form sodium aluminium silicate as well as other related compounds. Discharge of red mud can be hazardous environmentally because of its alkalinity and its chemical constituents. Until 1972, the Italian company Montedison was discharging red mud off the coast of Corsica . [ 7 ] The case is important in international law governing the Mediterranean sea . [ 8 ] In October 2010, approximately one million cubic metres (35 million cubic feet) of red mud slurry from an alumina plant near Kolontár in Hungary was accidentally released into the surrounding countryside in the Ajka alumina plant accident , killing ten people and contaminating a large area. [ 9 ] All life in the Marcal river was said to have been "extinguished" by the red mud, and within days the mud had reached the Danube . [ 10 ] The long-term environmental effects of the spill have been minor after a €127 million remediation effort by the Hungarian government. [ 11 ] Residue storage methods have changed substantially since the original plants were built. The practice in early years was to pump the slurry, at a concentration of about 20% solids, into lagoons or ponds sometimes created in former bauxite mines or depleted quarries. In other cases, impoundments were constructed with dams or levees , while for some operations valleys were dammed and the residue deposited in these holding areas. [ 12 ] It was once common practice for the red mud to be discharged into rivers, estuaries, or the sea via pipelines or barges; in other instances the residue was shipped out to sea and disposed of in deep ocean trenches many kilometres offshore. From 2016, all disposal into the sea, estuaries and rivers was stopped. [ 13 ] As residue storage space ran out and concern increased over wet storage, dry stacking has been increasingly adopted since the mid-1980s. [ 14 ] [ 15 ] [ 16 ] [ 17 ] In this method, residues are thickened to a high density slurry (48–55% solids or higher), and then deposited in a way that allows the material to consolidate and dry. [ 18 ] An increasingly popular treatment process is filtration, whereby a filter cake (typically resulting in 23–27% moisture) is produced. This cake can be washed with either water or steam to reduce alkalinity before being transported and stored as a semi-dried material. [ 19 ] Residue produced in this form is ideal for reuse as it has lower alkalinity, is cheaper to transport, and is easier to handle and process. Another option for ensuring safe storage is to use amphirols to dewater the material once deposited, after which it is 'conditioned' using farming equipment such as harrows to accelerate carbonation and thereby reduce the alkalinity.
Bauxite residue produced after press filtration and 'conditioning' as described above is classified as non-hazardous under the EU Waste Framework Directive. In 2013 Vedanta Aluminium , Ltd. commissioned a red mud powder-producing unit at its Lanjigarh refinery in Odisha , India , describing it as the first of its kind in the alumina industry, tackling major environmental hazards. [ 20 ] Since the Bayer process was first adopted industrially in 1894, the value of the remaining oxides has been recognized. Attempts have been made to recover the principal components – especially the iron oxides . Since bauxite mining began, a large amount of research effort has been devoted to seeking uses for the residue. Many studies are now being financed by the European Union under the Horizon Europe programme. [ citation needed ] Several studies have been conducted to develop uses of red mud. [ 21 ] An estimated 3 to 4 million tonnes (6.6 to 8.8 billion pounds) are used annually in the production of cement, [ 22 ] road construction [ 23 ] and as a source for iron. [ 3 ] [ 4 ] [ 5 ] Potential applications include the production of low cost concrete, [ 24 ] application to sandy soils to improve phosphorus cycling , amelioration of soil acidity , landfill capping and carbon sequestration . [ 25 ] [ 26 ] The current use of bauxite residue in Portland cement clinker , supplementary cementitious materials/blended cements and special calcium aluminate cements (CAC) and calcium sulfo-aluminate (CSA) cements has been extensively researched and documented in reviews. [ 27 ] In 2015, a major initiative was launched in Europe with funds from the European Union to address the valorization of red mud. Some 15 PhD students were recruited as part of the European Training Network (ETN) for Zero-Waste Valorisation of Bauxite Residue. [ 32 ] The key focus will be the recovery of iron, aluminium, titanium and rare-earth elements (including scandium ) while valorising the residue into building materials. A European Innovation Partnership, BRAVO (Bauxite Residue and Aluminium Valorisation Operations), has been formed to explore options for using by-products from the aluminium industry. This sought to bring together industry with researchers and stakeholders to explore the best available technologies to recover critical raw materials but has not proceeded. Additionally, EU funding of approximately €11.5 million has been allocated to RemovAL, a four-year programme starting in May 2018 looking at uses of bauxite residue with other wastes. A particular focus of this project is the installation of pilot plants to evaluate some of the interesting technologies from previous laboratory studies. As part of the H2020 project RemovAL, it is planned to erect a house in the Aspra Spitia area of Greece that will be made entirely out of materials from bauxite residue. Other EU funded projects that have involved bauxite residue and waste recovery have been ENEXAL (ENergy-EXergy of ALuminium industry) [2010–2014] and EURARE (European Rare earth resources) [2013–2017], and three more recent projects are ENSUREAL (ENsuring SUstainable ALumina production) [2017–2021], SIDEREWIN (Sustainable Electro-winning of Iron) [2017–2022] and SCALE (SCandium – ALuminium in Europe) [2016–2020], a €7 million project to look at the recovery of scandium from bauxite residue. In 2020, the International Aluminium Institute launched a roadmap for maximising the use of bauxite residue in cement and concrete.
[ 33 ] [ 34 ] In November 2020, the EU-funded research project ReActiv: Industrial Residue Activation for Sustainable Cement Production was launched. One of the world's largest cement companies, Holcim , in cooperation with 20 partners across 12 European countries, launched the ambitious 4-year ReActiv project (reactivproject.eu). The ReActiv project will create a novel sustainable symbiotic value chain, linking the by-product of the alumina production industry and the cement production industry. In ReActiv, modifications will be made to both the alumina production and the cement production sides of the chain, in order to link them through the new ReActiv technologies. The latter will modify the properties of the industrial residue, transforming it into a reactive material (with pozzolanic or hydraulic activity) suitable for new, low CO 2 footprint cement products. In this manner ReActiv proposes a win-win scenario for both industrial sectors (reducing wastes and CO 2 emissions respectively). Fluorchemie GmbH has developed a new flame-retardant additive from bauxite residue; the product is termed MKRS (modified re-carbonised red mud), carries the trademark ALFERROCK(R), and has potential applicability in a wide range of polymers (PCT WO2014/000014). One of its particular benefits is the ability to operate over a much broader temperature range, 220–350 °C (428–662 °F), than alternative zero-halogen inorganic flame retardants such as aluminium hydroxide, boehmite or magnesium hydroxide . In addition to polymer systems where aluminium hydroxide or magnesium hydroxide can be used, it has also been found to be effective in foamed polymers such as EPS and PUR foams at loadings up to 60%. In a suitable compact solid form, with a density of approximately 3.93 grams per cubic centimetre (0.142 lb/cu in), ALFERROCK, produced by the calcination of bauxite residues, has been found to be very effective as a thermal energy storage medium (WO2017/157664). The material can repeatedly be heated and cooled without deterioration and has a specific thermal capacity in the range of 0.6 – 0.8 kJ/(kg·K) at 20 °C (68 °F) and 0.9 – 1.3 kJ/(kg·K) at 726 °C (1,339 °F); this enables the material to work effectively in energy storage devices to maximise the benefits of solar power, wind turbines and hydro-electric systems. High strength geopolymers have been developed from red mud. [ 35 ] A more sustainable approach to low-grade bauxite processing is the IB2 process, a French technology developed to enhance the extraction of alumina from bauxite, especially low-grade bauxite. This method aims to boost alumina production efficiency while decreasing the environmental impacts typically linked with this process, notably the generation of red mud and carbon dioxide emissions. The IB2 technology, patented in 2019, [ 36 ] is the outcome of a decade of research and development efforts by Yves Occello, a former Pechiney chemist. This process improves the traditional Bayer process, which has been utilized for more than a century to extract alumina from bauxite. It presents a significant decrease in caustic soda consumption and a notable reduction in red mud output, thereby minimizing hazardous waste and environmental risks. In addition to reducing red mud production, the IB2 process aids in lowering CO 2 emissions, primarily through the optimized treatment of low-grade bauxite. By limiting the necessity to import high-grade bauxite, this process reduces the carbon footprint associated with ore transportation.
Furthermore, the process yields a byproduct that can be utilized in the production of eco-friendly cements, promoting the concept of a circular economy. [ 37 ] The inventor of the technology is chemist Yves Occello, who founded the company IB2 with Romain Girbal in 2017.
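The production figures quoted at the start of this article imply a simple mass balance between alumina output and residue generation. The following is a minimal sketch of that arithmetic; the function name is an illustrative choice, and only the tonnage and ratio figures come from the article.

```python
# Illustrative arithmetic only: red mud generated for a given alumina output,
# using the 1 to 1.5 t/t range and the 1.23 t/t global average quoted above.

def red_mud_tonnes(alumina_tonnes: float, residue_ratio: float = 1.23) -> float:
    """Tonnes of red mud for a given alumina production and residue ratio (t residue / t alumina)."""
    return alumina_tonnes * residue_ratio

alumina_2023 = 142e6  # tonnes of alumina produced in 2023, as cited in the article
for ratio in (1.0, 1.23, 1.5):
    print(f"ratio {ratio:.2f} t/t -> {red_mud_tonnes(alumina_2023, ratio) / 1e6:.0f} Mt of red mud")
```

With the 1.23 t/t global average, the 2023 output of about 142 million tonnes of alumina corresponds to roughly 170 million tonnes of residue, matching the figure quoted above.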
https://en.wikipedia.org/wiki/Red_mud
A "red neuron" (acidophilic or "eosinophilic" neuron) is a pathological finding in neurons , generally of the central nervous system , indicative of acute neuronal injury and subsequent apoptosis or necrosis . Acidophilic neurons are often found in the first 12–24 hours after an ischemic injury such as a stroke. Since neurons are permanent cells, they are most susceptible to hypoxic injury. The red coloration is due to pyknosis or degradation of the nucleus and loss of Nissl bodies , which are normally stained blue ( basophilic ) on hematoxylin & eosin staining ( H&E stain ). This leaves only the degraded proteins, which stain red ( eosinophilic ). Acidophilic neurons can also be stained with acidic dyes other than eosin (e.g. acid fuchsin and light green yellowish). [ 1 ] This article related to pathology is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Red_neuron
Red phosphorus is an allotrope of phosphorus . It is an amorphous polymeric red solid that is stable in air. It can be easily converted from white phosphorus under light or heating. It finds applications as matches and fire retardants . It was discovered in 1847 by Anton von Schrötter. [ 1 ] Red phosphorus is an amorphous form of phosphorus. Crystalline forms of red phosphorus include Hittorf's phosphorus and fibrous red phosphorus. The structure of red phosphorus contains the fragments illustrated below: [ 2 ] [ 3 ] One method of preparing red phosphorus involves heating white phosphorus in an inert atmosphere like nitrogen or carbon dioxide, with iodine as catalyst. [ 4 ] Another theoretically possible method of red phosphorus production is via light irradiation of white phosphorus. [ 5 ] However, it has not been used industrially, likely due to the suspicious quality and unidentified structure the product. [ 6 ] [ better source needed ] Under standard conditions, red phosphorus is more stable than white phosphorus, but less stable than the thermodynamically stable black phosphorus. The standard enthalpy of formation of red phosphorus is −17.6 kJ/mol. [ 3 ] Red phosphorus is kinetically most stable. Being polymeric , red phosphorus is insoluble in solvents . It shows semiconductor properties. [ 7 ] Due to such a kinetic stability, red phosphorus doesn't spontaneously ignite in air. It doesn't disproportionate in the presence of alkali, and is less reactive towards halogens , sulfur , and metals compared with white phosphorus . [ 7 ] Red phosphorus can be used as a flame retardant in resins . Its mechanism of action involves the formation of polyphosphoric acid (the hydrogen atoms are from the resin) and char , which prevents flame propagation. [ 8 ] However, for electronic/electrical systems, red phosphorus flame retardant has been effectively banned by major OEMs due to its tendency to induce premature failures. [ 9 ] One persistent problem is that red phosphorus in epoxy molding compounds induces elevated leakage current in semiconductor devices. [ 10 ] Another problem was acceleration of hydrolysis reactions in PBT insulating material. [ 11 ] Red phosphorus is used, along with abrasives , on the strike pads of modern safety matches . The match head, containing potassium chlorate, will ignite upon friction with the strike pad. [ 12 ] However, the red color of the matchhead is due to addition of red dyes, and has nothing to do with red phosphorus content. [ 13 ] Red phosphorus reacts with bromine and iodine to form phosphorus tribromide [ 14 ] [ 15 ] and phosphorus triiodide . Both are useful as halogenation agents, like replacing the hydroxyl group of alcohols. Phosphorus triiodide can also be used to produce hydroiodic acid after hydrolysis . This reaction is notable in the illicit production of methamphetamine and Krokodil , where hydrogen iodide acts as a reducing agent . [ 16 ] Red phosphorus is often used to prepare chemicals where the P -P bond is retained. Upon room temperature action with sodium chlorite , Na 2 H 2 P 2 O 6 is formed. [ 17 ] Red phosphorus can be used as an elemental photocatalyst for hydrogen formation from the water. [ 18 ] [ 7 ] It has also been researched as a sodium ion battery anode. [ 19 ] [ 2 ] Hittorf's phosphorus, or violet phosphorus, is one of the crystalline forms of red phosphorus. [ 20 ] [ 7 ] It adopts the following structure: Violet phosphorus can be prepared by sublimation of red phosphorus in a vacuum , in the presence of an iodine catalyst. 
[ 7 ] It is chemically similar to red phosphorus. There are, however, subtle differences. Violet phosphorus ignites upon impact in air, while red phosphorus is impact stable. Violet phosphorus doesn't ignite in the presence of air upon room temperature contact with bromine , unlike red phosphorus. [ citation needed ] The reaction of red phosphorus and bromine alone does not generate a flame. [ citation needed ] Fibrous red phosphorus is another crystalline form of red phosphorus. [ 7 ] It is obtained along with violet phosphorus when red phosphorus is sublimed in vacuum in the presence of iodine . [ 21 ] It is structurally similar to violet phosphorus. However, in fibrous red phosphorus the phosphorus chains lie parallel to one another, rather than orthogonally as in violet phosphorus. [ 22 ] [ 7 ] Fibrous red phosphorus, similar to red phosphorus, displays activity as a photocatalyst. [ 21 ] [ 23 ]
https://en.wikipedia.org/wiki/Red_phosphorus
Red plague is an accelerated corrosion of copper when plated with silver . After storage, damage or use in high- humidity environment, cuprous oxide forms on the surface of the parts. The corrosion is identifiable by presence of patches of brown-red powder deposit on the exposed copper. [ 1 ] Red plague is caused by normally occurring electrode potential difference between the copper and silver, leading to galvanic corrosion occurring in pits or breaks in the silver plating. It develops in the presence of moisture and oxygen when the porosity of the silver layer allows them to come in contact with the copper-silver interface. It is an electrochemical corrosion—a copper-silver galvanic cell forms and the copper acts as sacrificial anode . In suitable conditions, the corrosion can proceed rather quickly and lead to total circuit failure. More details can be seen in ESA document PSS-01-720, [ 2 ] with details on determining the susceptibility of silver-plated copper wire to red plague corrosion found in ECSS-Q-ST-70-20C. [ 3 ] It is not to be confused with purple plague , a type of galvanic corrosion that occurs between gold and aluminum. [ 4 ] This corrosion -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Red_plague_(corrosion)
Red rot is a degradation process found in vegetable-tanned leather . [ 1 ] Red rot is caused by prolonged storage or exposure to high relative humidity , environmental pollution, and high temperature. In particular, red rot occurs at pH values of 4.2 to 4.5. Sulfur dioxide converts to sulfurous acid which forms hydrogen peroxide . The peroxide combines with residual tannins in the leather to oxidize proteins, creating ammonium sulfate and ammonium bisulfate . [ 2 ] Red rot is also caused by problems in the tanning or in the bookbinding. In the tanning examples are: sulfuric acid residue, use of contaminated water and incomplete tanning. The bookbinding process can cause red rot when acids and bases are used when coloring the leather. [ 3 ] The decay manifests as a characteristic powdering of the leather's surface, along with structural weakness through loss, delamination, and a felt-like consistency. The damage caused by red rot is irreversible. However, its spread, if caused by environmental factors, may be retarded by an application of a consolidant (such as Klucel G ) coated with a sealer (such as Renaissance Wax ). [ 4 ] The progress of red rot can be stopped or slowed with a treatment of aluminium alkoxide solution, which increases the pH value and becomes (in the presence of water) a buffering inorganic aluminium salt in the leather. [ 5 ]
https://en.wikipedia.org/wiki/Red_rot
In quantum mechanics, the Redfield equation is a Markovian master equation that describes the time evolution of the reduced density matrix ρ of a strongly coupled quantum system that is weakly coupled to an environment. The equation is named after Alfred G. Redfield, who first applied it, doing so for nuclear magnetic resonance spectroscopy. [ 1 ] It is also known as the Redfield relaxation theory. [ 2 ] There is a close connection to the Lindblad master equation. If a so-called secular approximation is performed, where only certain resonant interactions with the environment are retained, every Redfield equation transforms into a master equation of Lindblad type. Redfield equations are trace-preserving and correctly produce a thermalized state for asymptotic propagation. However, in contrast to Lindblad equations, Redfield equations do not guarantee a positive time evolution of the density matrix. That is, it is possible to get negative populations during the time evolution. The Redfield equation approaches the correct dynamics for sufficiently weak coupling to the environment. The general form of the Redfield equation is

$$\frac{\partial}{\partial t}\rho(t) = -\frac{i}{\hbar}[H,\rho(t)] - \frac{1}{\hbar^{2}}\sum_{m}\bigl[S_{m},\,\Lambda_{m}\rho(t)-\rho(t)\Lambda_{m}^{\dagger}\bigr]$$

where $H$ is the hermitian Hamiltonian, the $S_{m},\Lambda_{m}$ are operators that describe the coupling to the environment, and $[A,B]=AB-BA$ is the commutation bracket. The explicit form is given in the derivation below.

Consider a quantum system coupled to an environment with a total Hamiltonian of $H_{\text{tot}}=H+H_{\text{int}}+H_{\text{env}}$. Furthermore, we assume that the interaction Hamiltonian can be written as $H_{\text{int}}=\sum_{n}S_{n}E_{n}$, where the $S_{n}$ act only on the system degrees of freedom and the $E_{n}$ only on the environment degrees of freedom. The starting point of Redfield theory is the Nakajima–Zwanzig equation with $\mathcal{P}$ projecting on the equilibrium density operator of the environment and $\mathcal{Q}$ treated up to second order. [ 3 ] An equivalent derivation starts with second-order perturbation theory in the interaction $H_{\text{int}}$. [ 4 ] In both cases, the resulting equation of motion for the density operator in the interaction picture (with $H_{0,S}=H+H_{\text{env}}$) is

$$\frac{\partial}{\partial t}\rho_{\mathrm{I}}(t) = -\frac{1}{\hbar^{2}}\sum_{m,n}\int_{t_{0}}^{t}dt'\,\Bigl(C_{mn}(t-t')\bigl[S_{m,\mathrm{I}}(t),S_{n,\mathrm{I}}(t')\rho_{\mathrm{I}}(t')\bigr]-C_{mn}^{\ast}(t-t')\bigl[S_{m,\mathrm{I}}(t),\rho_{\mathrm{I}}(t')S_{n,\mathrm{I}}(t')\bigr]\Bigr)$$

Here, $t_{0}$ is some initial time, where the total state of the system and bath is assumed to be factorized, and we have introduced the bath correlation function $C_{mn}(t)=\operatorname{tr}\bigl(E_{m,\mathrm{I}}(t)E_{n}\rho_{\text{env,eq}}\bigr)$ in terms of the density operator of the environment in thermal equilibrium, $\rho_{\text{env,eq}}$. This equation is non-local in time: to get the derivative of the reduced density operator at time $t$, we need its values at all past times. As such, it cannot be easily solved. To construct an approximate solution, note that there are two time scales: a typical relaxation time $\tau_{r}$ that gives the time scale on which the environment affects the system time evolution, and the coherence time of the environment, $\tau_{c}$, that gives the typical time scale on which the correlation functions decay. If the relation $\tau_{c}\ll\tau_{r}$ holds, then the integrand becomes approximately zero before the interaction-picture density operator changes significantly. In this case, the so-called Markov approximation $\rho_{\mathrm{I}}(t')\approx\rho_{\mathrm{I}}(t)$ holds. If we also move $t_{0}\to-\infty$ and change the integration variable $t'\to\tau=t-t'$, we end up with the Redfield master equation

$$\frac{\partial}{\partial t}\rho_{\mathrm{I}}(t) = -\frac{1}{\hbar^{2}}\sum_{m,n}\int_{0}^{\infty}d\tau\,\Bigl(C_{mn}(\tau)\bigl[S_{m,\mathrm{I}}(t),S_{n,\mathrm{I}}(t-\tau)\rho_{\mathrm{I}}(t)\bigr]-C_{mn}^{\ast}(\tau)\bigl[S_{m,\mathrm{I}}(t),\rho_{\mathrm{I}}(t)S_{n,\mathrm{I}}(t-\tau)\bigr]\Bigr)$$

We can simplify this equation considerably if we use the shortcut $\Lambda_{m}=\sum_{n}\int_{0}^{\infty}d\tau\,C_{mn}(\tau)S_{n,\mathrm{I}}(t-\tau)$. In the Schrödinger picture, the equation then reads

$$\frac{\partial}{\partial t}\rho(t) = -\frac{i}{\hbar}[H,\rho(t)] - \frac{1}{\hbar^{2}}\sum_{m}\bigl[S_{m},\,\Lambda_{m}\rho(t)-\rho(t)\Lambda_{m}^{\dagger}\bigr]$$

Secular (Latin: saeculum, lit. 'century') approximation is an approximation valid for long times $t$. The time evolution of the Redfield relaxation tensor is neglected as the Redfield equation describes weak coupling to the environment.
Therefore, it is assumed that the relaxation tensor changes slowly in time, and it can be assumed constant for the duration of the interaction described by the interaction Hamiltonian. In general, the time evolution of the reduced density matrix can be written for the element $ab$ as

$$\frac{\partial}{\partial t}\rho_{ab}(t) = -i\omega_{ab}\,\rho_{ab}(t) - \sum_{cd}\mathcal{R}_{abcd}\,\rho_{cd}(t)\qquad(1)$$

where $\mathcal{R}$ is the time-independent Redfield relaxation tensor. Given that the actual coupling to the environment is weak (but non-negligible), the Redfield tensor is a small perturbation of the system Hamiltonian and the solution can be written as

$$\rho_{ab}(t)=e^{-i\omega_{ab}t}\rho_{ab,\mathrm{I}}(t)$$

where $\rho_{\mathrm{I}}(t)$ is not constant but a slowly changing amplitude reflecting the weak coupling to the environment. This is also a form of the interaction picture, hence the index "I". [ note 1 ] Taking the derivative of $\rho_{\mathrm{I}}(t)$ and substituting the equation ( 1 ) for $\frac{\partial}{\partial t}\rho_{ab}(t)$, we are left with only the relaxation part of the equation

$$\frac{\partial}{\partial t}\rho_{ab,\mathrm{I}}(t) = -\sum_{cd}\mathcal{R}_{abcd}\,e^{i\omega_{ab}t-i\omega_{cd}t}\,\rho_{cd,\mathrm{I}}(t).$$

We can integrate this equation on condition that the interaction picture of the reduced density matrix $\rho_{\mathrm{I}}(t)$ changes slowly in time (which is true if $\mathcal{R}$ is small), then $\rho_{ab,\mathrm{I}}(t)\approx\rho_{ab,\mathrm{I}}(0)$, getting

$$\rho_{ab,\mathrm{I}}(t) = \rho_{ab,\mathrm{I}}(0) - \sum_{cd}\int_{0}^{t}d\tau\,\mathcal{R}_{abcd}\,e^{i\omega_{ab}\tau-i\omega_{cd}\tau}\,\rho_{cd,\mathrm{I}}(t) = \rho_{ab,\mathrm{I}}(0) - \sum_{cd}\mathcal{R}_{abcd}\,\frac{\bigl(e^{i\Delta\omega t}-1\bigr)}{i\Delta\omega}\,\rho_{cd,\mathrm{I}}(t)$$

where $\Delta\omega=\omega_{ab}-\omega_{cd}$. In the limit of $\Delta\omega$ approaching zero, the fraction $\frac{(e^{i\Delta\omega t}-1)}{i\Delta\omega}$ approaches $t$, therefore the contribution of one element of the reduced density matrix to another element is proportional to time (and therefore dominates for long times $t$). In case $\Delta\omega$ is not approaching zero, the contribution of one element of the reduced density matrix to another oscillates with an amplitude proportional to $\frac{1}{\Delta\omega}$ (and therefore is negligible for long times $t$). It is therefore appropriate to neglect any contribution from non-diagonal elements ($cd$) to other non-diagonal elements ($ab$) and from non-diagonal elements ($cd$) to diagonal elements ($aa$, $a=b$), since the only case when frequencies of different modes are equal is the case of random degeneracy.
The only elements of the Redfield tensor left to evaluate after the secular approximation are therefore those coupling diagonal elements to diagonal elements ($\mathcal{R}_{aacc}$) and each non-diagonal element to itself ($\mathcal{R}_{abab}$).
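The Schrödinger-picture form of the Redfield equation given above can be propagated numerically once the operators $S_{m}$ and $\Lambda_{m}$ are known. The following is a minimal NumPy sketch for a two-level system; the Hamiltonian, the coupling operator, and in particular the constant $\Lambda$ operator are illustrative assumptions rather than the result of an actual bath-correlation-function integral.

```python
import numpy as np

# Minimal two-level-system sketch of the Schrodinger-picture Redfield equation
# d rho/dt = -i[H, rho] - sum_m [S_m, Lambda_m rho - rho Lambda_m^dagger]  (hbar = 1).
# H, S and Lambda below are illustrative assumptions, not a derived bath model.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
omega0, gamma = 1.0, 0.05
H = 0.5 * omega0 * sz           # system Hamiltonian
S = [sx]                        # system coupling operators S_m
Lam = [gamma * sx]              # toy stand-in for Lambda_m = int_0^inf C(tau) S_I(t - tau) dtau

def comm(a, b):
    return a @ b - b @ a

def redfield_rhs(rho):
    """Right-hand side of the Schrodinger-picture Redfield equation."""
    drho = -1j * comm(H, rho)
    for Sm, Lm in zip(S, Lam):
        drho -= comm(Sm, Lm @ rho - rho @ Lm.conj().T)
    return drho

# Fourth-order Runge-Kutta propagation from the excited state.
rho = np.array([[0, 0], [0, 1]], dtype=complex)
dt, steps = 0.01, 2000
for _ in range(steps):
    k1 = redfield_rhs(rho)
    k2 = redfield_rhs(rho + 0.5 * dt * k1)
    k3 = redfield_rhs(rho + 0.5 * dt * k2)
    k4 = redfield_rhs(rho + dt * k3)
    rho = rho + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

print("trace       =", np.trace(rho).real)   # stays 1: the equation is trace-preserving
print("populations =", np.diag(rho).real)    # positivity is not guaranteed in general
```

In this sketch the trace stays at one by construction, while positivity of the populations is not guaranteed in general, which is the limitation of the Redfield equation noted above.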
https://en.wikipedia.org/wiki/Redfield_equation
The Redfield ratio or Redfield stoichiometry is the consistent atomic ratio of carbon , nitrogen and phosphorus found in marine phytoplankton and throughout the deep oceans. The term is named for American oceanographer Alfred C. Redfield who in 1934 first described the relatively consistent ratio of nutrients in marine biomass samples collected across several voyages on board the research vessel Atlantis , and empirically found the ratio to be C:N:P = 106:16:1. [ 1 ] While deviations from the canonical 106:16:1 ratio have been found depending on phytoplankton species and the study area, the Redfield ratio has remained an important reference to oceanographers studying nutrient limitation. A 2014 paper summarizing a large data set of nutrient measurements across all major ocean regions spanning from 1970 to 2010 reported the global median C:N:P to be 163:22:1. [ 2 ] For his 1934 paper, Alfred Redfield analyzed nitrate and phosphate data for the Atlantic , Indian , Pacific oceans and Barents Sea . [ 1 ] As a Harvard physiologist , Redfield participated in several voyages on board the research vessel Atlantis , analyzing data for C, N, and P content in marine plankton, and referenced data collected by other researchers as early as 1898. Redfield’s analysis of the empirical data led to him to discover that across and within the three oceans and Barents Sea, seawater had an N:P atomic ratio near 20:1 (later corrected to 16:1), and was very similar to the average N:P of phytoplankton. To explain this phenomenon, Redfield initially proposed two mutually non-exclusive mechanisms: I) The N:P in plankton tends towards the N:P composition of seawater. Specifically, phytoplankton species with different N and P requirements compete within the same medium and come to reflect the nutrient composition of the seawater. [ 1 ] II) An equilibrium between seawater and planktonic nutrient pools is maintained through biotic feedback mechanisms. [ 1 ] [ 3 ] Redfield proposed a thermostat like scenario in which the activities of nitrogen fixers and denitrifiers keep the nitrate to phosphate ratio in the seawater near the requirements in the protoplasm. [ 4 ] Considering that at the time little was known about the composition of "protoplasm", or the bulk composition of phytoplankton, Redfield did not attempt to explain why its N:P ratio should be approximately 16:1. In 1958, almost a quarter century after first discovering the ratios, Redfield leaned toward the latter mechanism in his manuscript, The Biological Control of Chemical Factors in the Environment. [ 3 ] Redfield proposed that the ratio of nitrogen to phosphorus in plankton resulted in the global ocean having a remarkably similar ratio of dissolved nitrate to phosphate (16:1). He considered how the cycles of not just N and P but also C and O could interact to result in this match. Redfield discovered the remarkable congruence between the chemistry of the deep ocean and the chemistry of living things such as phytoplankton in the surface ocean. Both have N:P ratios of about 16:1 in terms of atoms. When nutrients are not limiting , the molar elemental ratio C:N:P in most phytoplankton is 106:16:1. Redfield thought it wasn't purely coincidental that the vast oceans would have a chemistry perfectly suited to the requirements of living organisms. 
Laboratory experiments under controlled chemical conditions have found that phytoplankton biomass will conform to the Redfield ratio even when environmental nutrient levels exceed it, suggesting that ecological adaptation to oceanic nutrient ratios is not the only governing mechanism (contrary to one of the mechanisms initially proposed by Redfield). [ 5 ] However, subsequent modeling of feedback mechanisms, specifically nitrate-phosphorus coupling fluxes, does support his proposed mechanism of biotic feedback equilibrium, though these results are confounded by limitations in our current understanding of nutrient fluxes. [ 6 ] In the ocean, a large portion of the biomass is found to be nitrogen-rich plankton. Many of these plankton are consumed by other plankton biomass which have similar chemical compositions. This results in a similar N:P ratio, on average, for all the plankton throughout the world's oceans, empirically found to average approximately 16:1. When these organisms sink into the ocean interior, their biomass is consumed by bacteria that, in aerobic conditions, oxidize the organic matter to form dissolved inorganic nutrients, mainly carbon dioxide , nitrate, and phosphate. That the nitrate to phosphate ratio in the interior of all of the major ocean basins is highly similar is possibly due to the residence times of these elements in the ocean relative to the ocean's circulation time, roughly 100 000 years for phosphorus and 2000 years for nitrogen. [ 7 ] The fact that the residence times of these elements are greater than the mixing times of the oceans (~ 1000 years) [ 8 ] can result in the ratio of nitrate to phosphate in the ocean interior remaining fairly uniform. It has been shown that phytoplankton play a key role in helping maintain this ratio. As organic matter sinks, both nitrate and phosphate are released into the ocean via remineralization. Microorganisms preferentially consume nitrate over phosphate, leading to deeper oceanic waters having an N:P ratio of less than 16:1. From there, the ocean's currents upwell the nutrients to the surface, where phytoplankton consume the excess phosphorus and maintain an N:P ratio of 16:1 by consuming N 2 via nitrogen fixation. [ 9 ] While such arguments can potentially explain why the ratios are fairly constant, they do not address the question of why the N:P ratio is nearly 16 and not some other number. The research that resulted in this ratio has become a fundamental feature in the understanding of the biogeochemical cycles of the oceans, and one of the key tenets of biogeochemistry. The Redfield ratio is instrumental in estimating carbon and nutrient fluxes in global circulation models . It also helps in determining which nutrients are limiting in a localized system, if there is a limiting nutrient. The ratio can also be used to understand the formation of phytoplankton blooms and subsequent hypoxia by comparing the ratio between different regions, such as a comparison of the Redfield Ratio of the Mississippi River to the ratio of the northern Gulf of Mexico. [ 10 ] Controlling N:P could be a means for sustainable reservoir management. [ 11 ] It may even be the case that the Redfield Ratio is applicable to terrestrial plants, soils, and soil microbial biomass, which would inform about limiting resources in terrestrial ecosystems. [ 12 ] In a study from 2007, soil and microbial biomass were found to have consistent C:N:P ratios of 186:13:1 and 60:7:1, respectively, on average at a global scale.
[ 12 ] The Redfield ratio was initially derived empirically from measurements of the elemental composition of plankton in addition to the nitrate and phosphate content of seawater collected from a few stations in the Atlantic Ocean . This was later supported by hundreds of independent measurements of dissolved nitrate and phosphate. However, the composition of individual species of phytoplankton grown under nitrogen or phosphorus limitation shows that this N:P ratio can vary anywhere from 6:1 to 60:1. While understanding this problem, Redfield never attempted to explain it with the exception of noting that the N:P ratio of inorganic nutrients in the ocean interior was an average with small scale variability to be expected. Although the Redfield ratio is remarkably stable in the deep ocean, it has been widely shown that phytoplankton may have large variations in the C:N:P composition, and their life strategy plays a role in the C:N:P ratio. This variability has made some researchers speculate that the Redfield ratio perhaps is a general average in the modern ocean rather than a fundamental feature of phytoplankton, [ 13 ] though it has also been argued that it is related to a homeostatic protein-to- rRNA ratio fundamentally present in both prokaryotes and eukaryotes, which contributes to it being the most common composition. [ 14 ] There are several possible explanations for the observed variability in C:N:P ratios. The speed at which the cell grows has an influence on cell composition and thereby its stoichiometry. [ 15 ] Also, when phosphorus is scarce, phytoplankton communities can lower their P content, raising the N:P. [ 16 ] Additionally, the accumulation and quantity of dead phytoplankton and detritus can affect the availability of certain food sources which in turn affects the composition of the cell. [ 17 ] In some ecosystems, the Redfield ratio has also been shown to vary significantly by the dominant phytoplankton taxa present in an ecosystem, even in systems with abundant nutrients. Consequently, the system-specific Redfield ratio could serve as a proxy for plankton community structure. [ 18 ] Despite reports that the elemental composition of organisms such as marine phytoplankton in an oceanic region do not conform to the canonical Redfield ratio, the fundamental concept of this ratio remains valid and useful. Some feel that there are other elements, such as potassium , sulfur , zinc , copper , and iron which are also important in the ocean chemistry . [ 19 ] In particular, iron (Fe) was considered of great importance as early biological oceanographers hypothesized that iron may also be a limiting factor for primary production in the ocean. [ 20 ] Since then experimentation has proven that Iron is a limiting factor for primary production. Iron-rich solution was added to 64 km 2 area which led to an increase in phytoplankton primary production. [ 21 ] As a result an extended Redfield ratio was developed to include this as part of this balance. This new stoichiometric ratio states that the ratio should be 106 C:16 N:1 P:0.1-0.001 Fe. The large variation for Fe is a result of the significant obstacle of ships and scientific equipment contaminating any samples collected at sea with excess Fe. [ 22 ] It was this contamination that resulted in early evidence suggesting that iron concentrations were high and not a limiting factor in marine primary production. Diatoms need, among other nutrients, silicic acid to create biogenic silica for their frustules (cell walls). 
As a result of this, the Redfield-Brzezinski nutrient ratio was proposed for diatoms and stated to be C:Si:N:P = 106:15:16:1. [ 23 ] Extending beyond primary production itself, the oxygen consumed by aerobic respiration of phytoplankton biomass has also been shown to follow a predictable proportion to other elements. The O 2 :C ratio has been measured at 138:106. [ 6 ]
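Because the canonical ratios quoted above are fixed numbers, applying them in practice amounts to simple ratio arithmetic. The sketch below is a hypothetical illustration rather than an established oceanographic tool: it compares a measured N:P ratio against 16:1 to flag the likely limiting nutrient and converts a respired carbon amount into an oxygen demand using the 138:106 proportion. The sample concentrations are made up.

```python
# Minimal sketch of using the canonical ratios for a limiting-nutrient check and an
# oxygen-demand estimate.  The function names and the sample concentrations are
# hypothetical illustrations; only the 16:1 and 138:106 ratios come from the text.

REDFIELD_N_TO_P = 16.0        # canonical N:P
O2_PER_C = 138.0 / 106.0      # O2 consumed per organic C respired

def likely_limiting_nutrient(n_umol_kg: float, p_umol_kg: float) -> str:
    """Compare a measured dissolved N:P ratio against the 16:1 reference."""
    ratio = n_umol_kg / p_umol_kg
    if ratio < REDFIELD_N_TO_P:
        return f"N:P = {ratio:.1f} (< 16): nitrogen is the likely limiting nutrient"
    return f"N:P = {ratio:.1f} (>= 16): phosphorus is the likely limiting nutrient"

def o2_demand(c_umol_kg: float) -> float:
    """Oxygen demand implied by aerobic respiration of a given amount of organic carbon."""
    return O2_PER_C * c_umol_kg

print(likely_limiting_nutrient(n_umol_kg=12.0, p_umol_kg=1.0))
print(likely_limiting_nutrient(n_umol_kg=20.0, p_umol_kg=1.0))
print(f"O2 demand for 106 umol C: {o2_demand(106.0):.0f} umol O2")
```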
https://en.wikipedia.org/wiki/Redfield_ratio
In mathematics, a Redheffer matrix, often denoted $A_{n}$ as studied by Redheffer (1977), is a square (0,1) matrix whose entries $a_{ij}$ are 1 if $i$ divides $j$ or if $j=1$; otherwise, $a_{ij}=0$. It is useful in some contexts to express Dirichlet convolution, or convolved divisor sums, in terms of matrix products involving the transpose of the $n^{th}$ Redheffer matrix.

Since the invertibility of the Redheffer matrices is complicated by the initial column of ones in the matrix, it is often convenient to express $A_{n}:=C_{n}+D_{n}$, where $C_{n}:=[c_{ij}]$ is defined to be the (0,1) matrix whose entries are one if and only if $j=1$ and $i\neq 1$. The remaining one-valued entries in $A_{n}$ then correspond to the divisibility condition reflected by the matrix $D_{n}$, which, as can be seen by an application of Mobius inversion, is always invertible with inverse $D_{n}^{-1}=\left[\mu(j/i)M_{i}(j)\right]$, where $M_{i}(j)$ denotes the divisibility indicator that equals one exactly when $i$ divides $j$ and zero otherwise. We then have a characterization of the singularity of $A_{n}$ expressed by

$$\det\left(A_{n}\right)=\det\left(D_{n}^{-1}C_{n}+I_{n}\right).$$

With the same indicator function $M$, we can define the $n^{th}$ Redheffer (transpose) matrix to be the $n\times n$ square matrix $R_{n}=[M_{j}(i)]_{1\leq i,j\leq n}$ in usual matrix notation. We will continue to make use of this notation throughout the next sections. As a concrete example, consider the $12\times 12$ Redheffer matrix $A_{12}$; in the split sum-of-matrices notation $A_{12}:=C_{12}+D_{12}$, the one-valued entries contributed by $C_{12}$ are precisely the initial column of ones (a small computational sketch constructing such matrices appears below). A corresponding application of the Mobius inversion formula shows that the $n^{th}$ Redheffer transpose matrix is always invertible, with inverse entries given by $\left[\mu(i/j)M_{j}(i)\right]$, where $\mu(n)$ denotes the Moebius function; the $12\times 12$ inverse Redheffer transpose matrix follows entry by entry from this formula.

The determinant of the $n\times n$ square Redheffer matrix is given by the Mertens function $M(n)$. In particular, the matrix $A_{n}$ is not invertible precisely when the Mertens function is zero (or is close to changing signs). As a corollary of the disproof [ 1 ] of the Mertens conjecture, it follows that the Mertens function changes sign, and is therefore zero, infinitely many times, so the Redheffer matrix $A_{n}$ is singular at infinitely many natural numbers. The determinants of the Redheffer matrices are immediately tied to the Riemann Hypothesis through this relation with the Mertens function, since the Hypothesis is equivalent to showing that $M(x)=O\left(x^{1/2+\varepsilon}\right)$ for all (sufficiently small) $\varepsilon>0$. In a somewhat unconventional construction which reinterprets the (0,1) matrix entries to denote inclusion in some increasing sequence of indexing sets, we can see that these matrices are also related to factorizations of Lambert series.
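The determinant identity $\det(A_{n})=M(n)$ is easy to check numerically for small $n$. The following is a minimal sketch, using sympy only for exact integer determinants and a hand-written trial-division Moebius function so that the example stays self-contained; it is an illustration, not an efficient implementation.

```python
# A minimal sketch checking det(A_n) = M(n) (the Mertens function) for small n.
# The Redheffer matrix definition follows the one given above: a_ij = 1 if j = 1
# or i divides j.  sympy is used only for an exact integer determinant.
from sympy import Matrix

def mobius(n: int) -> int:
    """Moebius function mu(n), computed by trial-division factorisation."""
    mu, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:        # squared prime factor -> mu(n) = 0
                return 0
            mu = -mu
        d += 1
    if n > 1:                     # leftover prime factor
        mu = -mu
    return mu

def redheffer(n: int) -> Matrix:
    # 0-based (i, j) corresponds to 1-based (i + 1, j + 1).
    return Matrix(n, n, lambda i, j: 1 if (j == 0 or (j + 1) % (i + 1) == 0) else 0)

def mertens(n: int) -> int:
    return sum(mobius(k) for k in range(1, n + 1))

for n in range(1, 13):
    assert redheffer(n).det() == mertens(n)
print("det(A_n) = M(n) verified for n = 1..12")
print(redheffer(4))   # the 4 x 4 example, for inspection
```

Running the sketch confirms, for example, that $\det(A_{2})=M(2)=0$, the first singular case.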
This connection to Lambert series arises because, for a fixed arithmetic function $f$, the coefficients of the Lambert series expansion over $f$ (displayed next) provide a so-called inclusion mask for the indices over which we sum $f$ to arrive at the series coefficients of these expansions. Notably, observe that

$$\sum_{n\geq 1}\frac{f(n)q^{n}}{1-q^{n}}=\sum_{m\geq 1}\Bigl(\sum_{d\mid m}f(d)\Bigr)q^{m}.$$

Now in the special case of these divisor sums, which, as we can see from the above expansion, are codified by boolean (zero-one) valued inclusion in the sets of divisors of a natural number $n$, it is possible to re-interpret the Lambert series generating functions which enumerate these sums via yet another matrix-based construction. Namely, Merca and Schmidt (2017-2018) proved invertible matrix factorizations expanding these generating functions in the form of [ 2 ]

$$\sum_{n\geq 1}\frac{f(n)q^{n}}{1-q^{n}}=\frac{1}{(q;q)_{\infty}}\sum_{n\geq 1}\Bigl(\sum_{k=1}^{n}s_{n,k}f(k)\Bigr)q^{n},$$

where $(q;q)_{\infty}$ denotes the infinite q-Pochhammer symbol and where the lower triangular matrix sequence is exactly generated as the coefficients $s_{n,k}=[q^{n}]\frac{q^{k}}{1-q^{k}}(q;q)_{\infty}$, though these terms also have interpretations as differences of special even (odd) indexed partition functions. Merca and Schmidt (2017) also proved a simple inversion formula which allows the implicit function $f$ to be expressed as a sum over the convolved coefficients $\ell(n)=(f\ast 1)(n)$ of the original Lambert series generating function; [ 3 ] in that formula $p(n)$ denotes the partition function, $\mu(n)$ is the Moebius function, and the coefficients of $(q;q)_{\infty}$ inherit a quadratic dependence on $j$ through the pentagonal number theorem. This inversion formula is compared to the inverses (when they exist) of the Redheffer matrices $A_{n}$ for the sake of completeness here. Provided that the underlying so-termed mask matrix, which specifies the inclusion of indices in the divisor sums at hand, is invertible, this type of construction can be used to expand other Redheffer-like matrices for other special number theoretic sums and need not be limited to those forms classically studied here. For example, in 2018 Mousavi and Schmidt extended such matrix based factorization lemmas to the cases of Anderson-Apostol divisor sums (of which Ramanujan sums are a notable special case) and sums indexed over the integers that are relatively prime to each $n$ (for example, the sums that classically define the tally denoted by the Euler phi function). [ 4 ] More to the point, the examples considered in the applications section below suggest a study of the properties of what can be considered generalized Redheffer matrices representing other special number theoretic sums.

Turning to the spectrum of the Redheffer matrix itself, known estimates bound the asymptotic behavior of the spectrum of $A_{n}$ when $n$ is large. It can also be shown that the spectral radius $\rho_{n}$ satisfies $1+\sqrt{n-1}\leq\rho_{n}<\sqrt{n}+O(\log n)$, and by a careful analysis (see the characteristic polynomial expansions below) that $\rho_{n}=\sqrt{n}+\log\sqrt{n}+O(1)$.
We have that $[a_{1},a_{2},\ldots,a_{n}]$ is an eigenvector of $A_{n}^{T}$ corresponding to some eigenvalue $\lambda\in\sigma(A_{n})$ in the spectrum of $A_{n}$ if and only if, for $n\geq 2$, two conditions relating the components $a_{j}$ hold. If we restrict ourselves to the so-called non-trivial cases where $\lambda\neq 1$, then given any initial eigenvector component $a_{1}$ we can recursively compute the remaining $n-1$ components. With this in mind, for $\lambda\neq 1$ we can define associated sequences indexed by $\lambda$. There are a couple of curious implications related to the definitions of these sequences. First, membership $\lambda\in\sigma(A_{n})$ can be characterized in terms of these sequences. Secondly, we have an established formula for the Dirichlet series, or Dirichlet generating function, over these sequences for fixed $\lambda\neq 1$, which holds for all $\Re(s)>1$ and involves the Riemann zeta function $\zeta(s)$.

A graph theoretic interpretation of evaluating the zeros of the characteristic polynomial of $A_{n}$ and bounding its coefficients is given in Section 5.1 of [ 5 ]. Estimates of the sizes of the Jordan blocks of $A_{n}$ corresponding to the eigenvalue one are given in [ 6 ]. A brief overview of a modified approach to factorizing the characteristic polynomial, $p_{A_{n}}(x)$, of these matrices can be given without the full scope of the somewhat technical proofs justifying the bounds from the references cited above. Namely, with the shorthand $s:=\lfloor\log_{2}(n)\rfloor$, one defines a sequence of auxiliary polynomial expansions $f_{n}(t)$. Each $f_{n}(t)$ is known to have two real roots, denoted by $t_{n}^{\pm}$, whose asymptotics involve Euler's classical gamma constant $\gamma\approx 0.577216$, and the remaining coefficients of these polynomials satisfy explicit bounds. The only 20 remaining complex zeros of $f_{n}(t)$, those not characterized by these two dominant zeros of the polynomial, are remarkably more size-constrained; a plot illustrating this for $n\sim 10^{6}$ is reproduced in the freely available article cited above.

We provide a few examples of the utility of the Redheffer matrices interpreted as a (0,1) matrix whose zero-one entries correspond to inclusion in an increasing sequence of index sets. These examples should serve to freshen up some of the at times dated historical perspective of these matrices, which are often treated as footnote-worthy by virtue of an inherent, and deep, relation of their determinants to the Mertens function and equivalent statements of the Riemann Hypothesis. This interpretation is a great deal more combinatorial in construction than typical treatments of the special Redheffer matrix determinants. Nonetheless, this combinatorial twist on enumerating special sequences of sums has been explored more recently in a number of papers and is a topic of active interest in pre-print archives.
Before diving into the full construction of this spin on the Redheffer matrix variants R n {\displaystyle R_{n}} defined above, observe that this type of expansion is in many ways essentially just another variation on the use of a Toeplitz matrix to represent truncated power series expressions, where the matrix entries are coefficients of the formal variable in the series. Let us explore an application of this particular view of a (0,1) matrix as masking inclusion of summation indices in a finite sum over some fixed function. See the citations to the references [ 7 ] and [ 8 ] for existing generalizations of the Redheffer matrices in the context of general arithmetic function cases. The inverse matrix terms are referred to as a generalized Mobius function within the context of sums of this type in [ 9 ] . First, given any two non-identically-zero arithmetic functions f and g , we can provide explicit matrix representations which encode their Dirichlet convolution in rows indexed by the natural numbers n ≥ 1 , 1 ≤ n ≤ x {\displaystyle n\geq 1,1\leq n\leq x} : Then letting e T := [ 1 , 1 , … , 1 ] {\displaystyle e^{T}:=[1,1,\ldots ,1]} denote the vector of all ones, it is easily seen that the n t h {\displaystyle n^{th}} entry of the matrix-vector product e T ⋅ D f , g ( x ) {\displaystyle e^{T}\cdot D_{f,g}(x)} gives the convolved Dirichlet sums for all 1 ≤ n ≤ x {\displaystyle 1\leq n\leq x} , where the upper index x ≥ 2 {\displaystyle x\geq 2} is arbitrary. One task that is particularly onerous given an arbitrary function f is to determine its Dirichlet inverse exactly without resorting to the standard recursive definition of this function via yet another convolved divisor sum involving the same function f and its yet-to-be-determined inverse: f − 1 ( n ) = − 1 f ( 1 ) ∑ d ∣ n , d > 1 f ( d ) f − 1 ( n / d ) , n ≥ 2 {\displaystyle f^{-1}(n)=-{\frac {1}{f(1)}}\sum _{d\mid n,\ d>1}f(d)f^{-1}(n/d),\quad n\geq 2} , with f − 1 ( 1 ) = 1 / f ( 1 ) {\displaystyle f^{-1}(1)=1/f(1)} . It is clear that in general the Dirichlet inverse f − 1 ( n ) {\displaystyle f^{-1}(n)} for f , i.e., the uniquely defined arithmetic function such that ( f − 1 ∗ f ) ( n ) = δ n , 1 {\displaystyle (f^{-1}\ast f)(n)=\delta _{n,1}} , involves sums of nested divisor sums of depth from one to ω ( n ) {\displaystyle \omega (n)} , where this upper bound is the prime omega function which counts the number of distinct prime factors of n . As this example shows, we can formulate an alternate way to construct the Dirichlet inverse function values via matrix inversion with our variant Redheffer matrices, R n {\displaystyle R_{n}} . There are several frequently cited articles in established journals that develop expansions of number theoretic divisor sums, convolutions, and Dirichlet series (to name a few) through matrix representations. Besides providing non-trivial estimates of the corresponding spectra and eigenspaces associated with notable and important applications of these representations, the underlying machinery in representing sums of these forms by matrix products is to define a so-termed masking matrix whose zero-or-one valued entries denote inclusion in an increasing sequence of sets of the natural numbers { 1 , 2 , … , n } {\displaystyle \{1,2,\ldots ,n\}} .
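As a concrete illustration of the matrix-inversion route to the Dirichlet inverse described above, the following sketch uses a plain lower triangular divisor mask matrix. This is a generic stand-in that may differ in normalization from the variant Redheffer matrices R_n of the article, and the function f is an arbitrary example with f(1) ≠ 0.

```python
# Sketch: recover the Dirichlet inverse f^{-1} by solving the lower triangular
# linear system T * finv = e_1, where T[n-1][d-1] = f(n/d) when d | n.
# The function f below is an arbitrary illustrative choice with f(1) != 0.
from fractions import Fraction

x = 16                              # upper index (illustrative)
f = lambda n: Fraction(n + 1)

T = [[f(n // d) if n % d == 0 else Fraction(0) for d in range(1, x + 1)]
     for n in range(1, x + 1)]

# Forward substitution: (T finv)_n = delta_{n,1} defines f^{-1}(1..x).
finv = [Fraction(0)] * x
for n in range(x):
    rhs = Fraction(1) if n == 0 else Fraction(0)
    finv[n] = (rhs - sum(T[n][d] * finv[d] for d in range(n))) / T[n][n]

# Verify the defining property (f * f^{-1})(n) = delta_{n,1}.
for n in range(1, x + 1):
    conv = sum(f(n // d) * finv[d - 1] for d in range(1, n + 1) if n % d == 0)
    assert conv == (1 if n == 1 else 0)
print([str(v) for v in finv[:8]])
```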
To illustrate that the preceding terminology sets up a sensible matrix based system for representing a wide range of special summations, consider the following construction: Let A n ⊆ [ 1 , n ] ∩ Z {\displaystyle {\mathcal {A}}_{n}\subseteq [1,n]\cap \mathbb {Z} } be a sequence of index sets, and for any fixed arithmetic function f : N ⟶ C {\displaystyle f:\mathbb {N} \longrightarrow \mathbb {C} } define the sums One of the classes of sums considered by Mousavi and Schmidt (2017) defines the relatively prime divisor sums by setting the index sets in the last definition to be This class of sums can be used to express important special arithmetic functions of number theoretic interest, including Euler's phi function (where classically we define m := 0 {\displaystyle m:=0} ) as and even the Mobius function through its representation as a discrete (finite) Fourier transform: Citations in the full paper provide other examples of this class of sums, including applications to cyclotomic polynomials (and their logarithms). The referenced article by Mousavi and Schmidt (2017) develops a factorization-theorem-like treatment for expanding these sums which is an analog of the Lambert series factorization results given in the previous section above. The associated matrices and their inverses for this definition of the index sets A n {\displaystyle {\mathcal {A}}_{n}} then allow us to perform the analog of Moebius inversion for divisor sums, which can be used to express the summand functions f as a quasi-convolved sum over the inverse matrix entries and the left-hand-side special functions, such as φ ( n ) {\displaystyle \varphi (n)} or μ ( n ) {\displaystyle \mu (n)} , pointed out in the last pair of examples. These inverse matrices have many curious properties (a good reference pulling together a summary of all of them is currently lacking) which are perhaps best conveyed to new readers by inspection. With this in mind, consider the case of the upper index x := 21 {\displaystyle x:=21} and the relevant matrices defined for this case given as follows: Examples of invertible matrices which define other special sums with non-standard but clear applications should be catalogued and listed in this generalizations section for completeness. An existing summary of inversion relations , and in particular of exact criteria under which sums of these forms can be inverted and related, is found in many references on orthogonal polynomials . Other good examples of this type of factorization treatment for inverting relations between sums over sufficiently invertible, or well enough behaved, triangular sets of weight coefficients include the Mobius inversion formula , the binomial transform , and the Stirling transform , among others.
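The two representations mentioned above, Euler's phi function as the count of residues relatively prime to n and the Mobius function as a finite Fourier (Ramanujan-type) sum over those residues, can be checked directly. The sketch below is an independent numerical illustration and does not follow the notation of Mousavi and Schmidt.

```python
# Sketch: phi(n) = #{1 <= k <= n : gcd(k, n) = 1}, and
# mu(n) = sum over those k of exp(2*pi*i*k/n) (a finite Fourier transform).
import cmath
from math import gcd
from sympy import totient, mobius

for n in range(1, 31):
    coprime = [k for k in range(1, n + 1) if gcd(k, n) == 1]
    assert len(coprime) == totient(n)
    dft = sum(cmath.exp(2j * cmath.pi * k / n) for k in coprime)
    assert abs(dft - mobius(n)) < 1e-9
    print(n, len(coprime), mobius(n))
```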
https://en.wikipedia.org/wiki/Redheffer_matrix
The Redi Award is an international science award given to scientists who have made significant contributions in toxinology , the scientific study of venoms , poisons and toxins . The award is sponsored by the International Society on Toxinology (IST). [ 1 ] Osservazioni intorno alle vipere ( Observations about the Viper ), written by the Italian polymath Francesco Redi in 1664, is regarded as a milestone marking the beginning of toxinology research. Redi was the first scientist to elucidate the scientific basis of snakebite and of the venom of the viper . [ 2 ] He showed for the first time that viper venom comes from the fang , not the gallbladder as was then believed; that it is not poisonous when swallowed; and that it is effective only when it enters the bloodstream . [ 3 ] He even demonstrated the possibility of slowing down the venom action in the blood by a tight ligature before the wound. This work is heralded as the foundation of toxinology . [ 4 ] In honour of the pioneer, the International Society on Toxinology (IST) instituted the Redi Award in 1967 in recognition of scientists for their significant contributions in toxinology research. [ 1 ] The IST awards scientists or clinicians who have made outstanding contributions to the field of toxinology. The award is given at each World Congress of the Society, which is generally held every three years. It is the highest award bestowed by the society and the most prestigious in the world for toxinologists. Selection for the award is made by the Redi Award Committee, chaired by the editor of Toxicon (the official journal of the IST) and accompanied by past and present Executive Officers of the society and former Redi awardees. The result is only announced at the World Congress. The recipient is then invited to present a lecture of his or her own choosing, officially called the Redi Lecture, to the congress. [ 3 ] [ 5 ] The Award consists of a framed citation describing the merits of the awardee and financial assistance to help cover expenses associated with attendance at the meeting. [ 6 ] The inaugural award was presented in 1967. [ 7 ]
https://en.wikipedia.org/wiki/Redi_Award
In chemistry, redistribution usually refers to the exchange of anionic ligands bonded to metal and metalloid centers. The conversion does not involve redox , in contrast to disproportionation reactions. Some useful redistribution reactions are conducted at higher temperatures; upon cooling the mixture, the product mixture is kinetically frozen and the individual products can be separated. In cases where redistribution is rapid at mild temperatures, the reaction is less useful synthetically but still important mechanistically. Redistribution reactions are exhibited by methylboranes. Thus monomethyldiborane rapidly converts at room temperature to diborane and trimethylborane : [ 1 ] Useful redistribution reactions are found in organoaluminium , organoboron , and organosilicon chemistry . [ 2 ] [ 3 ] In another example, tetramethylsilane is an undesirable product of the industrially important direct process , but it can be converted (recycled) into more useful products by redistribution with silicon tetrachloride : In organotin chemistry , the mixed alkyl tin chlorides are produced by redistribution, a reaction called the Kocheshkov comproportionation: [ 4 ] Many metal halides undergo redistribution reactions, usually to afford nearly statistical mixtures of products. For example, titanium tetrachloride and titanium tetrabromide redistribute their halide ligands, one of many reactions in this conversion is shown: [ 5 ]
https://en.wikipedia.org/wiki/Redistribution_(chemistry)
In physics and thermodynamics , the Redlich–Kwong equation of state is an empirical, algebraic equation that relates temperature, pressure, and volume of gases. It is generally more accurate than the van der Waals equation and the ideal gas equation at temperatures above the critical temperature . It was formulated by Otto Redlich and Joseph Neng Shun Kwong in 1949. [ 1 ] [ 2 ] It showed that a two-parameter, cubic equation of state could well reflect reality in many situations, standing alongside the much more complicated Beattie–Bridgeman model and Benedict–Webb–Rubin equation that were used at the time. Although it was initially developed for gases, the Redlich–Kwong equation has been considered the most modified equation of state since those modifications have been aimed to generalize the predictive results obtained from it. [ 3 ] Although this equation is not currently employed in practical applications, [ 4 ] modifications derived from this mathematical model like the Soave Redlich-Kwong (SRK), and Peng Robinson have been improved and currently used in simulation and research of vapor–liquid equilibria . [ 3 ] [ 5 ] The Redlich–Kwong equation is formulated as: [ 6 ] [ 7 ] p = R T V m − b − a T V m ( V m + b ) , {\displaystyle p={\frac {R\,T}{V_{m}-b}}-{\frac {a}{{\sqrt {T}}\;V_{m}\,(V_{m}+b)}},} where: The constants are different depending on which gas is being analyzed. The constants can be calculated from the critical point data of the gas: [ 6 ] a = 1 9 ( 2 3 − 1 ) R 2 T c 2.5 P c = 0.42748 R 2 T c 2.5 P c , b = 2 3 − 1 3 R T c P c = 0.08664 R T c P c , {\displaystyle {\begin{aligned}a&={\frac {1}{9({\sqrt[{3}]{2}}-1)}}\,{\frac {R^{2}\,{T_{c}}^{2.5}}{P_{c}}}=0.42748\,{\frac {R^{2}\,{T_{c}}^{2.5}}{P_{c}}},\\[1ex]b&={\frac {{\sqrt[{3}]{2}}-1}{3}}\,{\frac {R\,T_{c}}{P_{c}}}=0.08664\,{\frac {R\,T_{c}}{P_{c}}},\end{aligned}}} where: The Redlich–Kwong equation can also be represented as an equation for the compressibility factor of gas, as a function of temperature and pressure: [ 8 ] Z = p V m R T = 1 1 − h − A 2 B h 1 + h {\displaystyle Z={\frac {p\,V_{m}}{R\,T}}={\frac {1}{1-h}}\ -{\frac {A^{2}}{B}}{\frac {h}{1+h}}} where: Or more simply: Z = p V m R T = V m V m − b − a R T 3 / 2 ( V m + b ) {\displaystyle Z={\frac {pV_{m}}{RT}}={\frac {V_{m}}{V_{m}-b}}-{\frac {a}{RT^{3/2}\left(V_{m}+b\right)}}} This equation only implicitly gives Z as a function of pressure and temperature, but is easily solved numerically, originally by graphical interpolation, and now more easily by computer. Moreover, analytic solutions to cubic functions have been known for centuries and are even faster for computers. The Redlich-Kwong equation of state may also be expressed as a cubic function of the molar volume. 
[ 7 ] For all Redlich–Kwong gases: Z c = 1 3 {\displaystyle Z_{c}={\frac {1}{3}}} where: Using p r = p / P c {\displaystyle p_{r}=p/P_{\text{c}}} , V r = V m / V m,c {\displaystyle V_{r}=V_{\text{m}}/V_{\text{m,c}}} , T r = T / T c {\displaystyle T_{r}=T/T_{\text{c}}} the equation of state can be written in the reduced form : p r = Z c − 1 T r V r − 0.08664 Z c − 1 − 0.42748 Z c − 2 T r V r ( V r + 0.08664 Z c − 1 ) {\displaystyle p_{r}={\frac {Z_{c}^{-1}T_{r}}{V_{r}-0.08664Z_{c}^{-1}}}-{\frac {0.42748Z_{c}^{-2}}{{\sqrt {T_{r}}}V_{r}\left(V_{r}+{0.08664}Z_{c}^{-1}\right)}}} And since Z c − 1 = 3 {\displaystyle Z_{c}^{-1}=3} it follows: p r = 3 T r V r − b ′ − 1 b ′ T r V r ( V r + b ′ ) {\displaystyle p_{r}={\frac {3T_{r}}{V_{r}-b'}}-{\frac {1}{b'{\sqrt {T_{r}}}V_{r}\left(V_{r}+b'\right)}}} with b ′ = 2 3 − 1 ≈ 0.26 {\displaystyle b'={\sqrt[{3}]{2}}-1\approx 0.26} From the Redlich–Kwong equation, the fugacity coefficient of a gas can be estimated: [ 8 ] ln ⁡ ϕ = ∫ 0 P Z − 1 p d P = Z − 1 − ln ⁡ ( Z − B P ) − A 2 B ln ⁡ ( 1 + B P Z ) {\displaystyle \ln \phi =\int _{0}^{P}{{\frac {Z-1}{p}}dP}=Z-1-\ln \left(Z-B\,P\right)-{\frac {A^{2}}{B}}\,\ln \left(1+{\frac {B\,P}{Z}}\right)} It is possible to express the critical constants T c and P c as functions of a and b by reversing the following system of 2 equations a ( T c , P c ) and b ( T c , P c ) with 2 variables T c , P c : a = 1 9 ( 2 3 − 1 ) R 2 T c 5 / 2 P c = 1 9 ( 2 3 − 1 ) R 2 T c 5 / 2 2 3 − 1 3 R T c b ⟹ a = b R T c 3 / 2 3 ( 2 3 − 1 ) 2 ⟹ T c = 3 2 / 3 ( 2 3 − 1 ) 4 / 3 ( a b R ) 2 / 3 {\displaystyle {\begin{aligned}a&={\frac {1}{9({\sqrt[{3}]{2}}-1)}}\,{\frac {R^{2}\,{T_{c}}^{5/2}}{P_{c}}}={\frac {1}{9({\sqrt[{3}]{2}}-1)}}\,{\frac {R^{2}\,{T_{c}}^{5/2}}{{\frac {{\sqrt[{3}]{2}}-1}{3}}\,{\frac {R\,T_{c}}{b}}}}\\[1ex]\implies a&={\frac {bR\,{T_{c}}^{3/2}}{3{\left({\sqrt[{3}]{2}}-1\right)}^{2}}}\\[1ex]\implies T_{c}&=3^{2/3}{\left({\sqrt[{3}]{2}}-1\right)}^{4/3}{\left({\frac {a}{bR}}\right)}^{2/3}\end{aligned}}} b = 2 3 − 1 3 R T c P c ⟹ P c = 2 3 − 1 3 R T c b ⟹ P c = ( 2 3 − 1 ) 7 / 3 3 1 / 3 R 1 / 3 a 2 / 3 b 5 / 3 {\displaystyle {\begin{aligned}b&={\frac {{\sqrt[{3}]{2}}-1}{3}}\,{\frac {R\,T_{c}}{P_{c}}}\\[1ex]\implies P_{c}&={\frac {{\sqrt[{3}]{2}}-1}{3}}\,{\frac {R\,T_{c}}{b}}\\[1ex]\implies P_{c}&={\frac {({\sqrt[{3}]{2}}-1)^{7/3}}{3^{1/3}}}R^{1/3}{\frac {a^{2/3}}{b^{5/3}}}\end{aligned}}} Because of the definition of compressibility factor at critical condition, it is possible to reverse it to find the critical molar volume V m,c , by knowing previous found P c , T c and Z c =1/3. Z = P V m R T ⟹ Z c = P c V m , c R T c ⟹ V m , c = Z c R T c P c {\displaystyle Z={\frac {PV_{m}}{RT}}\implies Z_{c}={\frac {P_{c}V_{m,c}}{RT_{c}}}\implies V_{m,c}=Z_{c}{\frac {RT_{c}}{P_{c}}}} V m , c = R 3 3 2 / 3 ( 2 3 − 1 ) 4 / 3 ( a b R ) 2 / 3 ( 2 3 − 1 ) 7 / 3 3 1 / 3 R 1 / 3 a 2 / 3 b 5 / 3 = R 3 3 b R ( 2 3 − 1 ) = b 2 3 − 1 {\displaystyle V_{m,c}={\frac {R}{3}}{\frac {3^{2/3}({\sqrt[{3}]{2}}-1)^{4/3}({\frac {a}{bR}})^{2/3}}{{\frac {({\sqrt[{3}]{2}}-1)^{7/3}}{3^{1/3}}}R^{1/3}{\frac {a^{2/3}}{b^{5/3}}}}}={\frac {R}{3}}{\frac {3b}{R({\sqrt[{3}]{2}}-1)}}={\frac {b}{{\sqrt[{3}]{2}}-1}}} The Redlich–Kwong equation was developed with an intent to also be applicable to mixtures of gases. 
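The cubic (compressibility factor) form of the equation discussed above can be sketched numerically. The example below is not from the article: the CO2 critical constants are standard literature values, the state point is arbitrary, and the coefficients here are defined as A = a p / (R² T^2.5) and B = b p / (R T), which may be scaled differently from the A and B used earlier in this article.

```python
# Illustrative sketch: Redlich-Kwong compressibility factor and molar volume.
# Critical constants for CO2 (literature values); the state point is arbitrary.
import numpy as np

R = 8.314462618                 # J/(mol K)
Tc, Pc = 304.13, 7.377e6        # K, Pa

a = 0.42748 * R**2 * Tc**2.5 / Pc
b = 0.08664 * R * Tc / Pc

T, p = 350.0, 2.0e6             # example temperature (K) and pressure (Pa)
A = a * p / (R**2 * T**2.5)
B = b * p / (R * T)

# Cubic equivalent to the Redlich-Kwong equation:
#   Z^3 - Z^2 + (A - B - B^2) Z - A B = 0
roots = np.roots([1.0, -1.0, A - B - B**2, -A * B])
Z = max(r.real for r in roots if abs(r.imag) < 1e-8)   # vapor-like root
Vm = Z * R * T / p
print(f"Z = {Z:.4f}, Vm = {Vm * 1e3:.3f} L/mol")
```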
In a mixture, the b term, representing the volume of the molecules, is an average of the b values of the components, weighted by the mole fractions: b = ∑ i ∑ j x i x j b i j , {\displaystyle b=\sum _{i}\sum _{j}x_{i}\,x_{j}b_{ij},} or B = ∑ i x i B i {\displaystyle B=\sum _{i}x_{i}\,B_{i}} where: The cross-terms of b ij (i.e. terms for which i ≠ j {\displaystyle i\neq j} ), are commonly computed as b i j = b i + b j 2 ( 1 − l i j ) , {\displaystyle b_{ij}={\frac {b_{i}+b_{j}}{2}}(1-l_{ij}),} where l i j {\displaystyle l_{ij}} is an often empirically fitted interaction parameter accounting for asymmetry in the cross interactions. [ 9 ] The constant representing the attractive forces, a , is not linear with respect to mole fraction, but rather depends on the square of the mole fractions. That is: a = ∑ i ∑ j x i x j a i j {\displaystyle a=\sum _{i}\sum _{j}x_{i}\,x_{j}\,a_{i\,j}} where: It is generally assumed that the attractive cross terms represent the geometric average of the individual a terms, adjusted using an interaction parameter k i j {\displaystyle k_{ij}} , that is: [ 9 ] a i j = ( a i a j ) 1 / 2 ( 1 − k i j ) , {\displaystyle a_{i\,j}=(a_{i}\,a_{j})^{1/2}(1-k_{ij}),} Where the interaction parameter k i j {\displaystyle k_{ij}} is an often empirically fitted parameter accounting for asymmetry in the molecular cross-interactions. [ 9 ] In this case, the following equation for the attractive term is furnished: A = ∑ i x i A i {\displaystyle A=\sum _{i}x_{i}\,A_{i}} where A i is the A term for the i' th component of the mixture. These manners of creating a and b parameters for a mixture from the parameters of the pure fluids are commonly known as the van der Waals one-fluid mixing and combining rules. [ 9 ] The Van der Waals equation , formulated in 1873 by Johannes Diderik van der Waals , is generally regarded as the first somewhat realistic equation of state (beyond the ideal gas law): p = R T V m − b − a V m 2 {\displaystyle p={\frac {RT}{V_{\mathrm {m} }-b}}-{\frac {a}{V_{\mathrm {m} }^{2}}}} However, its modeling of real behavior is not sufficient for many applications, and by 1949, had fallen out of favor, with the Beattie–Bridgeman and Benedict–Webb–Rubin equations of state being used preferentially, both of which contain more parameters than the Van der Waals equation. [ 10 ] The Redlich–Kwong equation was developed by Redlich and Kwong while they were both working for the Shell Development Company at Emeryville, California . Kwong had begun working at Shell in 1944, where he met Otto Redlich when he joined the group in 1945. The equation arose out of their work at Shell - they wanted an easy, algebraic way to relate the pressures, volumes, and temperatures of the gasses they were working with - mostly non-polar and slightly polar hydrocarbons (the Redlich–Kwong equation is less accurate for hydrogen-bonding gases). It was presented jointly in Portland, Oregon at the Symposium on Thermodynamics and Molecular Structure of Solutions in 1948, as part of the 14th Meeting of the American Chemical Society . [ 11 ] The success of the Redlich–Kwong equation in modeling many real gases accurately demonstrate that a cubic, two-parameter equation of state can give adequate results, if it is properly constructed. After they demonstrated the viability of such equations, many others created equations of similar form to try to improve on the results of Redlich and Kwong. The equation is essentially empirical – the derivation is neither direct nor rigorous. 
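Returning to the mixing and combining rules described above, they translate directly into a short computation. The sketch below uses made-up mole fractions, pure-component parameters, and interaction parameters purely for illustration.

```python
# Illustrative sketch: van der Waals one-fluid mixing rules for a binary mixture.
import math

x = [0.3, 0.7]                    # mole fractions (illustrative)
a_pure = [0.40, 0.25]             # pure-component a_i, in consistent units
b_pure = [3.0e-5, 2.2e-5]         # pure-component b_i (m^3/mol)
k = [[0.0, 0.05], [0.05, 0.0]]    # interaction parameters k_ij (illustrative)
l = [[0.0, 0.00], [0.00, 0.0]]    # interaction parameters l_ij (illustrative)

n = len(x)
a_mix = sum(x[i] * x[j] * math.sqrt(a_pure[i] * a_pure[j]) * (1 - k[i][j])
            for i in range(n) for j in range(n))
b_mix = sum(x[i] * x[j] * 0.5 * (b_pure[i] + b_pure[j]) * (1 - l[i][j])
            for i in range(n) for j in range(n))
print(a_mix, b_mix)
```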
The Redlich–Kwong equation is very similar to the Van der Waals equation, with only a slight modification being made to the attractive term, giving that term a temperature dependence. At high pressures, the volume of all gases approaches some finite volume, largely independent of temperature, that is related to the size of the gas molecules. This volume is reflected in the b in the equation. It is empirically true that this volume is about 0.26 V c (where V c is the volume at the critical point). This approximation is quite good for many small, non-polar compounds – the value ranges between about 0.24 V c and 0.28V c . [ 12 ] In order for the equation to provide a good approximation of volume at high pressures, it had to be constructed such that b = 0.26 V c . {\displaystyle b=0.26\ V_{c}.} The first term in the equation represents this high-pressure behavior. The second term corrects for the attractive force of the molecules to each other. The functional form of a with respect to the critical temperature and pressure is empirically chosen to give the best fit at moderate pressures for most relatively non-polar gasses. [ 11 ] The values of a and b are completely determined by the equation's shape and cannot be empirically chosen. Requiring it to hold at its critical point P = P c {\displaystyle P=P_{c}} , V = V c {\displaystyle V=V_{c}} , P c = R T c V c − b − a T c V c ( V c + b ) , {\displaystyle P_{c}={\frac {R\,T_{c}}{V_{c}-b}}-{\frac {a}{{\sqrt {T_{c}}}\;V_{c}\,(V_{c}+b)}},} enforcing the thermodynamic criteria for a critical point, ( ∂ P ∂ V ) T = 0 , ( ∂ 2 P ∂ V 2 ) T = 0 , {\displaystyle \left({\frac {\partial P}{\partial V}}\right)_{T}=0,\left({\frac {\partial ^{2}P}{\partial V^{2}}}\right)_{T}=0,} and without loss of generality defining b = b ′ V c {\displaystyle b=b'V_{c}} and V c = Z c R T c / P c {\displaystyle V_{c}=Z_{c}RT_{c}/P_{c}} yields 3 constraints, a = ( 1 + b ′ ) 2 ( b ′ − 1 ) 2 ( 2 + b ′ ) R 2 T c 5 / 2 Z c P c a = ( 1 + b ′ ) 3 ( 1 − b ′ ) 3 ( 3 + 3 b ′ + b ′ 2 ) R 2 T c 5 / 2 Z c P c a = ( 1 + b ′ ) ( 1 − Z c + b ′ Z c ) b ′ − 1 R 2 T c 5 / 2 Z c P c . {\displaystyle {\begin{aligned}a&={\frac {(1+b')^{2}}{(b'-1)^{2}(2+b')}}{\frac {R^{2}T_{c}^{5/2}Z_{c}}{P_{c}}}\\[2pt]a&={\frac {(1+b')^{3}}{(1-b')^{3}(3+3b'+b'^{2})}}{\frac {R^{2}T_{c}^{5/2}Z_{c}}{P_{c}}}\\[2pt]a&={\frac {(1+b')(1-Z_{c}+b'Z_{c})}{b'-1}}{\frac {R^{2}T_{c}^{5/2}Z_{c}}{P_{c}}}.\end{aligned}}} Simultaneously solving these while requiring b' and Z c to be positive yields only one solution: Z c = 1 3 , b ′ = 2 3 − 1 , a = P c V c 2 T c b ′ = 1 2 3 − 1 R 2 T c 5 / 2 9 P c . {\displaystyle {\begin{aligned}Z_{c}&={\frac {1}{3}},\qquad \qquad b'={\sqrt[{3}]{2}}-1,\\[1ex]a&={\frac {P_{c}V_{c}^{2}{\sqrt {T_{c}}}}{b'}}={\frac {1}{{\sqrt[{3}]{2}}-1}}\,{\frac {R^{2}\,{T_{c}}^{5/2}}{9P_{c}}}.\end{aligned}}} The Redlich–Kwong equation was designed largely to predict the properties of small, non-polar molecules in the vapor phase, which it generally does well. However, it has been subject to various attempts to refine and improve it. In 1975, Redlich himself published an equation of state adding a third parameter, in order to better model the behavior of both long-chained molecules, as well as more polar molecules. His 1975 equation was not so much a modification to the original equation as a re-inventing of a new equation of state, and was also formulated so as to take advantage of computer calculation, which was not available at the time the original equation was published. 
[ 12 ] Many others have offered competing equations of state, either modifications to the original equation, or equations quite different in form. It was recognized by the mid 1960s that to significantly improve the equation, the parameters, especially a , would need to become temperature dependent. As early as 1966, Barner noted that the Redlich–Kwong equation worked best for molecules with an acentric factor (ω) close to zero. He therefore proposed a modification to the attractive term: a = α + γ T − 1.5 {\displaystyle a=\alpha +\gamma \,T^{-1.5}} where It soon became desirable to obtain an equation that would also model well the Vapor–liquid equilibrium (VLE) properties of fluids, in addition to the vapor-phase properties. [ 10 ] Perhaps the best known application of the Redlich–Kwong equation was in calculating gas fugacities of hydrocarbon mixtures, which it does well, that was then used in the VLE model developed by Chao and Seader in 1961. [ 10 ] [ 14 ] However, in order for the Redlich–Kwong equation to stand on its own in modeling vapor–liquid equilibria, more substantial modifications needed to be made. The most successful of these modifications is the Soave modification to the equation, proposed in 1972. [ 15 ] Soave's modification involved replacing the T 1/2 power found in the denominator attractive term of the original equation with a more complicated temperature-dependent expression. He presented the equation as follows: P = R T V m − b − a α V m ( V m + b ) {\displaystyle P={\frac {R\,T}{V_{m}-b}}-{\frac {a\,\alpha }{V_{m}(V_{m}+b)}}} where The Peng–Robinson equation of state further modified the Redlich–Kwong equation by modifying the attractive term, giving p = R T V m − b − a α V m ( V m + b ) + b ( V m − b ) {\displaystyle p={\frac {R\,T}{V_{m}-b}}-{\frac {a\,\alpha }{V_{m}\,(V_{m}+b)+b\,(V_{m}-b)}}} the parameters a , b , and α are slightly modified, with [ 16 ] a = 0.457235 R 2 T c 2 p c b = 0.077796 R T c p c α = ( 1 + ( 0.37464 + 1.54226 ω − 0.26992 ω 2 ) ( 1 − T r ) ) 2 {\displaystyle {\begin{aligned}a&={\frac {0.457235\,R^{2}\,T_{c}^{2}}{p_{c}}}\\b&={\frac {0.077796\,R\,T_{c}}{p_{c}}}\\\alpha &=\left(1+(0.37464+1.54226\omega -0.26992\omega ^{2})(1-{\sqrt {T_{r}}})\right)^{2}\end{aligned}}} The Peng–Robinson equation typically gives similar VLE equilibria properties as the Soave modification, but often gives better estimations of the liquid phase density . [ 10 ] Several modifications have been made that attempt to more accurately represent the first term, related to the molecular size. The first significant modification of the repulsive term beyond the Van der Waals equation 's P hs = R T V m − b = R T V m 1 1 − b V m {\displaystyle P_{\text{hs}}={\frac {R\,T}{V_{m}-b}}={\frac {R\,T}{V_{m}}}\,{\frac {1}{1-{\frac {b}{V_{m}}}}}} (where P hs represents a hard spheres equation of state term.) was developed in 1963 by Thiele: [ 17 ] P hs = R T V m 1 − η 3 ( 1 − η ) 4 {\displaystyle P_{\text{hs}}={\frac {R\,T}{V_{m}}}\,{\frac {1-\eta ^{3}}{(1-\eta )^{4}}}} where η = b 4 V m . {\displaystyle \eta ={\frac {b}{4\,V_{m}}}.} This expression was improved by Carnahan and Starling to give [ 18 ] P hs = R T V m 1 + η + η 2 − η 3 ( 1 − η ) 3 {\displaystyle P_{\text{hs}}={\frac {R\,T}{V_{m}}}\,{\frac {1+\eta +\eta ^{2}-\eta ^{3}}{{\left(1-\eta \right)}^{3}}}} The Carnahan-Starling hard-sphere equation of state has term been used extensively in developing other equations of state, [ 10 ] and tends to give very good approximations for the repulsive term. 
[ 19 ] Beyond improved two-parameter equations of state, a number of three parameter equations have been developed, often with the third parameter depending on either Z c , the compressibility factor at the critical point, or ω, the acentric factor. Schmidt and Wenzel proposed an equation of state with an attractive term that incorporates the acentric factor: [ 20 ] P = R T V m − b − a V m 2 + ( 1 + 3 ω ) b V m − 3 ω b 2 {\displaystyle P={\frac {R\,T}{V_{m}-b}}-{\frac {a}{V_{m}^{2}+(1+3\,\omega )bV_{m}-3\omega b^{2}}}} This equation reduces to the original Redlich–Kwong equation in the case when ω = 0 , and to the Peng–Robinson equation when ω = 1/3 .
https://en.wikipedia.org/wiki/Redlich–Kwong_equation_of_state
Redmi Note 8 is a series of Android -based smartphones, part of the Redmi Note series by Redmi , a sub-brand of Xiaomi Inc . The Redmi Note 8 and Note 8 Pro were released on 29 August 2019 in an event held in China . [ 1 ] [ 2 ] The Redmi Note 8 Pro is the first smartphone to be equipped with a 64-megapixel camera. [ 2 ] The Note 8 Pro was released in Italy on 23 September 2019. [ 3 ] Later, on November 6, 2019, the Redmi Note 8T, which adds NFC support but lacks an LED indicator and has bigger bezels compared to the Redmi Note 8, was released. On May 24, 2021, the Redmi Note 8 2021 (sometimes written as Redmi Note 8 (2021) ), which uses the CPU of the Redmi Note 9 and ships with a newer software version, was released. [ 4 ] The smartphone was succeeded by the Redmi Note 9 Pro, a significant upgrade, in 2020. [ 5 ] Video recording across the series includes 1080p at 60 fps and 720p at 30 fps, depending on the model and camera. [ 7 ] [ 8 ] [ 9 ] [ 10 ] The Redmi Note 8 and Note 8 2021 measure 158.3 mm × 75.3 mm × 8.4 mm (6.23 in × 2.96 in × 0.33 in), while the Xiaomi Redmi Note 8T measures 165.5 mm × 76.7 mm × 8.8 mm (6.52 in × 3.02 in × 0.35 in). Both of them weigh 209 grams (7.4 oz). The Redmi Note 8, Note 8T and Note 8 2021 have flat backs, while the Redmi Note 8 Pro has a back with curved sides. The front and back are made of Gorilla Glass 5, and the frame is made of plastic. The bezels, which are always black, are small (except for the chin at the bottom); however, the phone wobbles when placed on its back because of the raised camera bump. It is also splash-proof and has a P2i water-repellent coating. There is a back-mounted fingerprint sensor on this phone. [ 11 ] This mobile technology –related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Redmi_Note_8
Redox ( / ˈ r ɛ d ɒ k s / RED -oks , / ˈ r iː d ɒ k s / REE -doks , reduction–oxidation [ 2 ] or oxidation–reduction [ 3 ] : 150 ) is a type of chemical reaction in which the oxidation states of the reactants change. [ 4 ] Oxidation is the loss of electrons or an increase in the oxidation state, while reduction is the gain of electrons or a decrease in the oxidation state. The oxidation and reduction processes occur simultaneously in the chemical reaction. There are two classes of redox reactions: "Redox" is a portmanteau of the words "REDuction" and "OXidation." The term "redox" was first used in 1928. [ 6 ] Oxidation is a process in which a substance loses electrons. Reduction is a process in which a substance gains electrons. The processes of oxidation and reduction occur simultaneously and cannot occur independently. [ 5 ] In redox processes, the reductant transfers electrons to the oxidant. Thus, in the reaction, the reductant or reducing agent loses electrons and is oxidized, and the oxidant or oxidizing agent gains electrons and is reduced. The pair of an oxidizing and reducing agent that is involved in a particular reaction is called a redox pair. A redox couple is a reducing species and its corresponding oxidizing form, [ 7 ] e.g., Fe 2+ / Fe 3+ .The oxidation alone and the reduction alone are each called a half-reaction because two half-reactions always occur together to form a whole reaction. [ 5 ] In electrochemical reactions the oxidation and reduction processes do occur simultaneously but are separated in space. Oxidation originally implied a reaction with oxygen to form an oxide. Later, the term was expanded to encompass substances that accomplished chemical reactions similar to those of oxygen. Ultimately, the meaning was generalized to include all processes involving the loss of electrons or the increase in the oxidation state of a chemical species. [ 8 ] : A49 Substances that have the ability to oxidize other substances (cause them to lose electrons) are said to be oxidative or oxidizing, and are known as oxidizing agents , oxidants, or oxidizers. The oxidant removes electrons from another substance, and is thus itself reduced. [ 8 ] : A50 Because it "accepts" electrons, the oxidizing agent is also called an electron acceptor . Oxidants are usually chemical substances with elements in high oxidation states [ 3 ] : 159 (e.g., N 2 O 4 , MnO − 4 , CrO 3 , Cr 2 O 2− 7 , OsO 4 ), or else highly electronegative elements (e.g. O 2 , F 2 , Cl 2 , Br 2 , I 2 ) that can gain extra electrons by oxidizing another substance. [ 3 ] : 909 Oxidizers are oxidants, but the term is mainly reserved for sources of oxygen, particularly in the context of explosions. Nitric acid is a strong oxidizer. [ 9 ] Substances that have the ability to reduce other substances (cause them to gain electrons) are said to be reductive or reducing and are known as reducing agents , reductants, or reducers. The reductant transfers electrons to another substance and is thus itself oxidized. [ 3 ] : 159 Because it donates electrons, the reducing agent is also called an electron donor . Electron donors can also form charge transfer complexes with electron acceptors. The word reduction originally referred to the loss in weight upon heating a metallic ore such as a metal oxide to extract the metal. In other words, ore was "reduced" to metal. [ 10 ] Antoine Lavoisier demonstrated that this loss of weight was due to the loss of oxygen as a gas. 
Later, scientists realized that the metal atom gains electrons in this process. The meaning of reduction then became generalized to include all processes involving a gain of electrons. [ 10 ] Reducing equivalent refers to chemical species which transfer the equivalent of one electron in redox reactions. The term is common in biochemistry . [ 11 ] A reducing equivalent can be an electron or a hydrogen atom as a hydride ion . [ 12 ] Reductants in chemistry are very diverse. Electropositive elemental metals , such as lithium , sodium , magnesium , iron , zinc , and aluminium , are good reducing agents. These metals donate electrons relatively readily. [ 13 ] Hydride transfer reagents , such as NaBH 4 and LiAlH 4 , reduce by atom transfer: they transfer the equivalent of hydride or H − . These reagents are widely used in the reduction of carbonyl compounds to alcohols . [ 14 ] [ 15 ] A related method of reduction involves the use of hydrogen gas (H 2 ) as sources of H atoms. [ 3 ] : 288 Redox reactions can occur slowly, as in the formation of rust , or rapidly, as in the case of burning fuel . Electron transfer reactions are generally fast, occurring within the time of mixing. [ 16 ] The mechanisms of atom-transfer reactions are highly variable because many kinds of atoms can be transferred. Such reactions can also be quite complex, involving many steps. The mechanisms of electron-transfer reactions occur by two distinct pathways, inner sphere electron transfer [ 17 ] and outer sphere electron transfer . [ 18 ] Analysis of bond energies and ionization energies in water allows calculation of the thermodynamic aspects of redox reactions. [ 19 ] Each half-reaction has a standard electrode potential ( E o cell ), which is equal to the potential difference or voltage at equilibrium under standard conditions of an electrochemical cell in which the cathode reaction is the half-reaction considered, and the anode is a standard hydrogen electrode where hydrogen is oxidized: [ 20 ] The electrode potential of each half-reaction is also known as its reduction potential ( E o red ), or potential when the half-reaction takes place at a cathode. The reduction potential is a measure of the tendency of the oxidizing agent to be reduced. Its value is zero for H + + e − → 1 ⁄ 2 H 2 by definition, positive for oxidizing agents stronger than H + (e.g., +2.866 V for F 2 ) and negative for oxidizing agents that are weaker than H + (e.g., −0.763V for Zn 2+ ). [ 8 ] : 873 For a redox reaction that takes place in a cell, the potential difference is: However, the potential of the reaction at the anode is sometimes expressed as an oxidation potential : The oxidation potential is a measure of the tendency of the reducing agent to be oxidized but does not represent the physical potential at an electrode. With this notation, the cell voltage equation is written with a plus sign In the reaction between hydrogen and fluorine , hydrogen is being oxidized and fluorine is being reduced: This spontaneous reaction releases a large amount of energy (542 kJ per 2 g of hydrogen) because two H-F bonds are much stronger than one H-H bond and one F-F bond. This reaction can be analyzed as two half-reactions . 
The oxidation reaction converts hydrogen to protons : The reduction reaction converts fluorine to the fluoride anion: The half-reactions are combined so that the electrons cancel: The protons and fluoride combine to form hydrogen fluoride in a non-redox reaction: The overall reaction is: In this type of reaction, a metal atom in a compound or solution is replaced by an atom of another metal. For example, copper is deposited when zinc metal is placed in a copper(II) sulfate solution: In the above reaction, zinc metal displaces the copper(II) ion from the copper sulfate solution, thus liberating free copper metal. The reaction is spontaneous and releases 213 kJ per 65 g of zinc. The ionic equation for this reaction is: As two half-reactions , it is seen that the zinc is oxidized: And the copper is reduced: A disproportionation reaction is one in which a single substance is both oxidized and reduced. For example, thiosulfate ion with sulfur in oxidation state +2 can react in the presence of acid to form elemental sulfur (oxidation state 0) and sulfur dioxide (oxidation state +4). Thus one sulfur atom is reduced from +2 to 0, while the other is oxidized from +2 to +4. [ 8 ] : 176 Cathodic protection is a technique used to control the corrosion of a metal surface by making it the cathode of an electrochemical cell . A simple method of protection connects protected metal to a more easily corroded " sacrificial anode " to act as the anode . The sacrificial metal, instead of the protected metal, then corrodes. Oxidation is used in a wide variety of industries, such as in the production of cleaning products and oxidizing ammonia to produce nitric acid . [ citation needed ] Redox reactions are the foundation of electrochemical cells, which can generate electrical energy or support electrosynthesis . Metal ores often contain metals in oxidized states, such as oxides or sulfides, from which the pure metals are extracted by smelting at high temperatures in the presence of a reducing agent. The process of electroplating uses redox reactions to coat objects with a thin layer of a material, as in chrome-plated automotive parts, silver plating cutlery , galvanization and gold-plated jewelry . [ citation needed ] Many essential biological processes involve redox reactions. Before some of these processes can begin, iron must be assimilated from the environment. [ 21 ] Cellular respiration , for instance, is the oxidation of glucose (C 6 H 12 O 6 ) to CO 2 and the reduction of oxygen to water . The summary equation for cellular respiration is: The process of cellular respiration also depends heavily on the reduction of NAD + to NADH and the reverse reaction (the oxidation of NADH to NAD + ). Photosynthesis and cellular respiration are complementary, but photosynthesis is not the reverse of the redox reaction in cellular respiration: Biological energy is frequently stored and released using redox reactions. Photosynthesis involves the reduction of carbon dioxide into sugars and the oxidation of water into molecular oxygen. The reverse reaction, respiration, oxidizes sugars to produce carbon dioxide and water. As intermediate steps, the reduced carbon compounds are used to reduce nicotinamide adenine dinucleotide (NAD + ) to NADH, which then contributes to the creation of a proton gradient , which drives the synthesis of adenosine triphosphate (ATP) and is maintained by the reduction of oxygen. In animal cells, mitochondria perform similar functions. 
The term redox state is often used to describe the balance of GSH/GSSG , NAD + /NADH and NADP + /NADPH in a biological system such as a cell or organ . The redox state is reflected in the balance of several sets of metabolites (e.g., lactate and pyruvate , beta-hydroxybutyrate and acetoacetate ), whose interconversion is dependent on these ratios. Redox mechanisms also control some cellular processes. Redox proteins and their genes must be co-located for redox regulation according to the CoRR hypothesis for the function of DNA in mitochondria and chloroplasts . Wide varieties of aromatic compounds are enzymatically reduced to form free radicals that contain one more electron than their parent compounds. In general, the electron donor is any of a wide variety of flavoenzymes and their coenzymes . Once formed, these anion free radicals reduce molecular oxygen to superoxide and regenerate the unchanged parent compound. The net reaction is the oxidation of the flavoenzyme's coenzymes and the reduction of molecular oxygen to form superoxide. This catalytic behavior has been described as a futile cycle or redox cycling. Minerals are generally oxidized derivatives of metals. Iron is mined as ores such as magnetite (Fe 3 O 4 ) and hematite (Fe 2 O 3 ). Titanium is mined as its dioxide, usually in the form of rutile (TiO 2 ). These oxides must be reduced to obtain the corresponding metals, often achieved by heating these oxides with carbon or carbon monoxide as reducing agents. Blast furnaces are the reactors where iron oxides and coke (a form of carbon) are combined to produce molten iron. The main chemical reaction producing the molten iron is: [ 22 ] Electron transfer reactions are central to myriad processes and properties in soils, and redox potential , quantified as Eh (platinum electrode potential ( voltage ) relative to the standard hydrogen electrode) or pe (analogous to pH as −log electron activity), is a master variable, along with pH, that controls and is governed by chemical reactions and biological processes. Early theoretical research with applications to flooded soils and paddy rice production was seminal for subsequent work on thermodynamic aspects of redox and plant root growth in soils. [ 23 ] Later work built on this foundation, and expanded it for understanding redox reactions related to heavy metal oxidation state changes, pedogenesis and morphology, organic compound degradation and formation, free radical chemistry, wetland delineation, soil remediation , and various methodological approaches for characterizing the redox status of soils. [ 24 ] [ 25 ] The key terms involved in redox can be confusing. [ 26 ] [ 27 ] For example, a reagent that is oxidized loses electrons; however, that reagent is referred to as the reducing agent. Likewise, a reagent that is reduced gains electrons and is referred to as the oxidizing agent. [ 28 ] These mnemonics are commonly used by students to help memorise the terminology: [ 29 ]
https://en.wikipedia.org/wiki/Redox
A redox gradient is a series of reduction-oxidation ( redox ) reactions sorted according to redox potential . [ 4 ] [ 5 ] The redox ladder displays the order in which redox reactions occur based on the free energy gained from redox pairs. [ 4 ] [ 5 ] [ 6 ] These redox gradients form both spatially and temporally as a result of differences in microbial processes, chemical composition of the environment, and oxidative potential. [ 5 ] [ 4 ] Common environments where redox gradients exist are coastal marshes , lakes , contaminant plumes, and soils . [ 1 ] [ 4 ] [ 5 ] [ 6 ] The Earth has a global redox gradient with an oxidizing environment at the surface and increasingly reducing conditions below the surface. [ 4 ] Redox gradients are generally understood at the macro level, but characterization of redox reactions in heterogeneous environments at the micro-scale requires further research and more sophisticated measurement techniques. [ 5 ] [ 1 ] [ 7 ] [ 6 ] Redox conditions are measured according to the redox potential (E h ) in volts, which represents the tendency for electrons to transfer from an electron donor to an electron acceptor . E h can be calculated using half reactions and the Nernst equation . [ 1 ] An E h of zero represents the redox couple of the standard hydrogen electrode H + /H 2, [ 8 ] a positive E h indicates an oxidizing environment (electrons will be accepted), and a negative E h indicates a reducing environment (electrons will be donated). [ 1 ] In a redox gradient, the most energetically favorable chemical reaction occurs at the "top" of the redox ladder and the least energetically favorable reaction occurs at the "bottom" of the ladder. [ 1 ] E h can be measured by collecting samples in the field and performing analyses in the lab, or by inserting an electrode into the environment to collect in situ measurements. [ 6 ] [ 5 ] [ 1 ] Typical environments in which to measure redox potential are bodies of water, soils, and sediments, all of which can exhibit high levels of heterogeneity. [ 5 ] [ 1 ] Collecting a high number of samples can produce high spatial resolution, but at the cost of low temporal resolution, since samples only reflect a single snapshot in time. [ 8 ] [ 1 ] [ 5 ] In situ monitoring can provide high temporal resolution by collecting continuous real-time measurements, but low spatial resolution, since the electrode is in a fixed location. [ 1 ] [ 5 ] Redox properties can also be tracked with high spatial and temporal resolution through the use of induced-polarization imaging; however, further research is needed to fully understand contributions of redox species to polarization. [ 6 ] Redox gradients are commonly found in the environment as functions of both space and time, [ 9 ] [ 8 ] particularly in soils and aquatic environments. [ 8 ] [ 6 ] Gradients are caused by varying physiochemical properties including availability of oxygen, soil hydrology, chemical species present, and microbial processes. [ 1 ] [ 4 ] [ 9 ] [ 8 ] Specific environments that are commonly characterized by redox gradients include waterlogged soils , wetlands , [ 8 ] contaminant plumes, [ 9 ] [ 4 ] and marine pelagic and hemipelagic sediments. [ 4 ] The following is a list of common reactions that occur in the environment in order from oxidizing to reducing (organisms performing the reaction in parentheses): [ 1 ] Redox gradients form in water columns and their sediments.
Varying levels of oxygen (oxic, suboxic, hypoxic ) within the water column alter redox chemistry and which redox reactions can occur. [ 10 ] Development of oxygen minimum zones also contributes to formation of redox gradients. Benthic sediments exhibit redox gradients produced by variations in mineral composition, organic matter availability, structure, and sorption dynamics. [ 5 ] Limited transport of dissolved electrons through subsurface sediments, combined with varying pore sizes of sediments creates significant heterogeneity in benthic sediments. [ 5 ] Oxygen availability in sediments determines which microbial respiration pathways can occur, resulting in a vertical stratification of redox processes as oxygen availability decreases with depth. [ 5 ] Soil E h is also largely a function of hydrological conditions. [ 1 ] [ 8 ] [ 6 ] In the event of a flood, saturated soils can shift from oxic to anoxic, creating a reducing environment as anaerobic microbial processes dominate. [ 1 ] [ 8 ] Moreover, small anoxic hotspots may develop within soil pore spaces, creating reducing conditions. [ 6 ] With time, the starting E h of a soil can be restored as water drains and the soil dries out. [ 1 ] [ 8 ] Soils with redox gradients formed by ascending groundwater are classified as gleysols , while soils with gradients formed by stagnant water are classified as stagnosols and planosols . Soil E h generally ranges from −300 to +900 mV. [ 8 ] The table below summarizes typical E h values for various soil conditions: [ 1 ] [ 8 ] Generally accepted E h limits that are tolerable by plants are +300 mV < E h < +700 mV. [ 8 ] 300 mV is the boundary value that separates aerobic from anaerobic conditions in wetland soils. [ 1 ] Redox potential (E h ) is also closely tied to pH , and both have significant influence on the function of soil-plant-microorganism systems. [ 1 ] [ 8 ] The main source of electrons in soil is organic matter . [ 8 ] Organic matter consumes oxygen as it decomposes, resulting in reducing soil conditions and lower E h . [ 8 ] Redox gradients form based on resource availability and physiochemical conditions (pH, salinity, temperature) and support stratified communities of microbes . [ 1 ] [ 5 ] [ 9 ] [ 8 ] [ 7 ] Microbes carry out differing respiration processes ( methanogenesis , sulfate reduction, etc.) based on the conditions around them and further amplify redox gradients present in the environment. [ 9 ] [ 1 ] [ 8 ] However, distribution of microorganisms cannot solely be determined from thermodynamics (redox ladder), but is also influenced by ecological and physiological factors. [ 6 ] [ 5 ] Redox gradients form along contaminant plumes, in both aquatic and terrestrial settings, as a function of the contaminant concentration and the impacts it has on relevant chemical processes and microbial communities. [ 1 ] [ 9 ] The highest rates of organic pollutant degradation along a redox gradient are found at the oxic-anoxic interface. [ 1 ] In groundwater, this oxic-anoxic environment is referred to as the capillary fringe , where the water table meets soil and fills empty pores. Because this transition zone is both oxic and anoxic, electron acceptors and donors are in high abundance and there is a high level of microbial activity, leading to the highest rates of contaminant biodegradation. [ 1 ] [ 9 ] Benthic sediments are heterogeneous in nature and subsequently exhibit redox gradients. 
[ 5 ] Due to this heterogeneity, gradients of reducing and oxidizing chemical species do not always overlap enough to support electron transport needs of niche microbial communities. [ 5 ] Cable bacteria have been characterized as sulfide-oxidizing bacteria that assist in connecting these areas of undersupplied and excess electrons to complete the electron transport for otherwise unavailable redox reactions. [ 5 ] Biofilms , found in tidal flats , glaciers , hydrothermal vents , and at the bottoms of aquatic environments, also exhibit redox gradients. [ 5 ] The community of microbes—often metal- or sulfate-reducing bacteria —produces redox gradients on the micrometer scale as a function of spatial physiochemical variability. [ 5 ] See sulfate-methane transition zone for coverage of microbial processes in SMTZs.
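As a small numerical companion to the E h discussion at the start of this article, the redox potential of a single couple follows from the Nernst equation. The sketch below is illustrative only: the Fe3+/Fe2+ standard potential of roughly +0.77 V is a standard literature value, and the activities are arbitrary inputs.

```python
# Illustrative sketch: Eh of a one-electron redox couple from the Nernst equation,
#   Eh = E0 - (R*T / (n*F)) * ln(a_red / a_ox)
import math

R = 8.314462618   # J/(mol K)
F = 96485.332     # C/mol
T = 298.15        # K

def eh_nernst(E0, n, a_ox, a_red):
    return E0 - (R * T / (n * F)) * math.log(a_red / a_ox)

# Fe3+/Fe2+ (E0 ~ +0.77 V) with an excess of the reduced species:
print(f"{eh_nernst(E0=0.77, n=1, a_ox=1e-5, a_red=1e-3):.3f} V")   # ~0.65 V, below E0
```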
https://en.wikipedia.org/wiki/Redox_gradient
A redox indicator (also called an oxidation-reduction indicator ) is an indicator which undergoes a definite color change at a specific electrode potential . The requirement for a fast and reversible color change means that the oxidation-reduction equilibrium for an indicator redox system needs to be established very quickly. Therefore, only a few classes of organic redox systems can be used for indicator purposes. [ 1 ] There are two common classes of redox indicators: The most common redox indicators are organic compounds . An example of a redox indicator is the molecule 2,2'- Bipyridine : in solution, it changes from light blue to red at an electrode potential of 0.97 V. Tables of redox indicators typically list the colors of the oxidized form and of the reduced form, together with the transition potentials at pH = 0 and at pH = 7; further examples include Sodium 2,6-Dichlorophenol-indophenol and indigodisulfonic acid.
https://en.wikipedia.org/wiki/Redox_indicator
A redox titration [ 1 ] is a type of titration based on a redox reaction between the analyte and titrant. It may involve the use of a redox indicator and/or a potentiometer. A common example of a redox titration is the treatment of a solution of iodine with a reducing agent to produce iodide, using a starch indicator to help detect the endpoint. Iodine (I 2 ) can be reduced to iodide (I − ) by, say, thiosulfate ( S 2 O 2− 3 ), and when all the iodine is consumed, the blue colour disappears. This is called an iodometric titration. Most often, the reduction of iodine to iodide is the last step in a series of reactions where the initial reactions convert an unknown amount of the solute (the substance being analyzed) to an equivalent amount of iodine, which may then be titrated. Sometimes other halogens (or haloalkanes) besides iodine are used in the intermediate reactions because they are available in better measurable standard solutions and/or react more readily with the solute. The extra steps in iodometric titration may be worthwhile because the equivalence point , where the blue colour disappears, is more distinct than in some other analytical or volumetric methods. The main redox titration types include iodometry, bromatometry, cerimetry, permanganometry, and dichrometry, classified according to the titrant used.
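As a worked illustration of the endpoint arithmetic in an iodometric titration (a sketch with made-up concentrations and volumes; the 1:2 iodine-to-thiosulfate stoichiometry comes from the reaction I2 + 2 S2O3^2- → 2 I^- + S4O6^2-):

```python
# Illustrative sketch: moles of iodine titrated by thiosulfate at the endpoint.
# Reaction: I2 + 2 S2O3^2- -> 2 I^- + S4O6^2-, hence n(I2) = n(S2O3^2-) / 2.
c_thio = 0.100        # mol/L thiosulfate titrant (made-up value)
v_thio = 18.45e-3     # L delivered when the blue starch colour disappears (made-up)

n_thio = c_thio * v_thio
n_iodine = n_thio / 2.0
print(f"iodine titrated: {n_iodine * 1e3:.3f} mmol")
```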
https://en.wikipedia.org/wiki/Redox_titration
Redshift is a techno-economic theory suggesting hypersegmentation [ clarification needed ] of information technology markets based on whether individual computing needs are over or under-served by Moore's law , which predicts the doubling of computing transistors (and therefore roughly computing power) every two years. The theory, proposed and named by New Enterprise Associates partner and former Sun Microsystems CTO Greg Papadopoulos , categorized a series of high growth markets (redshifting) while predicting slower GDP-driven growth in traditional computing markets (blueshifting). Papadopoulos predicted the result will be a fundamental redesign of components comprising computing systems. According to the Redshift theory, applications "redshift" when they grow dramatically faster than Moore's Law allows, growing quickly in their absolute number of systems. [ 1 ] In these markets, customers are running out of datacenter real-estate, power and cooling infrastructure. [ 2 ] According to Dell Senior Vice President Brad Anderson, “Businesses requiring hyperscale computing environments – where infrastructure deployments are measured by up to millions of servers, storage and networking equipment – are changing the way they approach IT.” [ 3 ] While various Redshift proponents offer minor alterations on the original presentation, “Redshifting” generally includes: [ 1 ] These are companies that drive heavy Internet traffic. This includes popular web-portals like Google , Yahoo , AOL and MSN . It also includes telecoms , multimedia, television over IP, online games like World of Warcraft and others. [ which? ] This segment has been enabled by widespread availability of high-bandwidth Internet connections to consumers through a DSL or cable modem . A simple way to understand this market is that for every byte of content served to a PC, mobile phone or other device over a network, there must exist computing systems to send it over the network. These are companies that do complex simulations that involve (for example) weather, stock markets or drug-design simulations. This is a generally elastic market because businesses frequently spend every "available" dollar budgeted for IT. A common anecdote claims that cutting the cost of computing by half causes customers in this segment to buy at least twice as much, because each marginal IT dollar spent contributes to business advantage. These are companies that aggregate traditional computing applications and offer them as services, typically in the form of Software as a Service (SaaS) . For example, companies that deploy CRM are over-served by Moore's Law, but companies that aggregate CRM functions and offer them as a service, such as Salesforce.com , grow faster than Moore's Law. A prime example of redshift was a crisis at eBay . In 1999 eBay suffered a database crisis when a single Oracle Database running on the fastest Sun machine available (these tracking Moore's law in this period) was not enough to cope with eBay's growth. The solution was to massively parallelise their system architecture. [ 4 ] [ 5 ] Redshift theory suggests that traditional computing markets, such as those serving enterprise resource planning or customer relationship management applications, have reached relative saturation in industrialized nations. Thereafter, proponents argued further market growth will closely follow gross domestic product growth, which typically remains under 10% for most countries annually. 
Given that Moore's Law continues to predict accurately the rate of computing transistor growth, which roughly translates into computing power doubling every two years, the Redshift theory suggests that traditional computing markets will ultimately contract as a percentage of computing expenditures over time. [ 1 ] Functionally, this means “Blueshifting” customers can satisfy computing requirement growth by swapping in faster processors without increasing the absolute number of computing systems. Papadopoulos argued that while traditional computing markets remain the dominant source of revenue through the late 2000s, a shift to hypergrowth markets will inevitably occur. When that shift occurs, he argued computing (but not computers) will become a utility, and differentiation in the IT market will be based upon a company's ability to deliver computing at massive scale, efficiently and with predictable service levels, much like electricity at that time. [ 1 ] If computing is to be delivered as a utility, Nicholas Carr suggested Papadopoulos' vision compares with Microsoft researcher Jim Hamilton, who both agree that computing is most efficiently generated in shipping containers . [ 6 ] Industry analysts are also beginning to quantify Redshifting and Blueshifting markets. According to International Data Corporation vice president Matthew Eastwood, "IDC believes that the IT market is in a period of hyper segmentation... This a class of customers that is Moore's law driven and as price performance gains continue, IDC believes that these organizations will accelerate their consumption of IT infrastructure.” [ 7 ] Key portions of Papadopoulos' theory were first presented by Sun Microsystems CEO Jonathan Schwartz in late 2006. [ 8 ] Papadopoulos later gave a full presentation on Redshift to Sun's annual Analyst Summit [ 1 ] in February 2007. The term Redshift refers to what happens when electromagnetic radiation, usually visible light, moves away from an observer. Papadopoulos chose this term to reflect growth markets because redshift helped cosmologists explain the expansion of the universe. Papadopoulos originally depicted traditional IT markets as green to represent their revenue base, but later changed them to “blueshift,” which occurs when a light source moves toward an observer, similar to what would happen during a contraction of the universe. [ 9 ]
https://en.wikipedia.org/wiki/Redshift_(theory)
Redtail Telematics is a provider of GPS-enabled fleet tracking products. The company is based in North America with headquarters in the United Kingdom. Redtail's products use a technology solution known as VAM (Vehicle Asset Management), which includes features such as sensors, onboard diagnostics, vehicle status engine detection, [ 1 ] and GPS jamming/tamper protection alerts. The company's technology is provided by Plextek Limited, with which it has a strategic design partnership. [ 2 ] As of 2015, it has partnered with insurance provider Admiral . [ 3 ] [ 4 ]
https://en.wikipedia.org/wiki/Redtail_Telematics_Corporation
In mathematics , the reduced derivative is a generalization of the notion of derivative that is well-suited to the study of functions of bounded variation . Although functions of bounded variation have derivatives in the sense of Radon measures , it is desirable to have a derivative that takes values in the same space as the functions themselves. Although the precise definition of the reduced derivative is quite involved, its key properties are quite easy to remember: The notion of reduced derivative appears to have been introduced by Alexander Mielke and Florian Theil in 2004. Let X be a separable , reflexive Banach space with norm ||·|| and fix T > 0. Let BV − ([0, T ]; X ) denote the space of all left-continuous functions z : [0, T ] → X with bounded variation on [0, T ]. For any function of time f , use subscripts +/− to denote the right/left continuous versions of f , i.e. For any sub-interval [ a , b ] of [0, T ], let Var( z , [ a , b ]) denote the variation of z over [ a , b ], i.e., the supremum The first step in the construction of the reduced derivative is to "stretch" time so that z can be linearly interpolated at its jump points. To this end, define The "stretched time" function τ̂ is left-continuous (i.e. τ̂ = τ̂ − ); moreover, τ̂ − and τ̂ + are strictly increasing and agree except at the (at most countable) jump points of z . Setting T̂ = τ̂ ( T ), this "stretch" can be inverted by Using this, the stretched version of z is defined by where θ ∈ [0, 1] and The effect of this definition is to create a new function ẑ which "stretches out" the jumps of z by linear interpolation. A quick calculation shows that ẑ is not just continuous, but also lies in a Sobolev space : The derivative of ẑ ( τ ) with respect to τ is defined almost everywhere with respect to Lebesgue measure . The reduced derivative of z is the pull-back of this derivative by the stretching function τ̂ : [0, T ] → [0, T̂ ]. In other words, Associated with this pull-back of the derivative is the pull-back of Lebesgue measure on [0, T̂ ], which defines the differential measure μ z :
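For concreteness, the variation invoked above, together with one natural choice of the stretching function consistent with the description (running time plus accumulated variation; this is a reconstruction, not necessarily Mielke and Theil's exact notation), can be written as Var ⁡ ( z , [ a , b ] ) = sup { ∑ i = 1 N ‖ z ( t i ) − z ( t i − 1 ) ‖ : a = t 0 < t 1 < ⋯ < t N = b } {\displaystyle \operatorname {Var} (z,[a,b])=\sup \left\{\sum _{i=1}^{N}\|z(t_{i})-z(t_{i-1})\|\,:\,a=t_{0}<t_{1}<\dots <t_{N}=b\right\}} and τ̂ ( t ) = t + Var ⁡ ( z , [ 0 , t ] ) {\displaystyle {\hat {\tau }}(t)=t+\operatorname {Var} (z,[0,t])} .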
https://en.wikipedia.org/wiki/Reduced_derivative
In biophysics and related fields, reduced dimension forms (RDFs) are unique on-off mechanisms for random walks that generate two-state trajectories (see Fig. 1 for an example of an RDF and Fig. 2 for an example of a two-state trajectory). It has been shown that RDFs solve two-state trajectories, since only one RDF can be constructed from the data; [ 1 ] this property does not hold for on-off kinetic schemes, since many kinetic schemes can be constructed from a particular two-state trajectory (even from an ideal on-off trajectory). Two-state time trajectories are very common in measurements in chemistry, physics, and the biophysics of individual molecules [ 2 ] [ 3 ] (e.g. measurements of protein dynamics and DNA and RNA dynamics , [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] activity of ion channels , [ 11 ] [ 12 ] [ 13 ] enzyme activity , [ 14 ] [ 15 ] [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ] [ 21 ] [ 22 ] [ 23 ] [ 24 ] quantum dots [ 25 ] [ 26 ] [ 27 ] [ 28 ] [ 29 ] [ 30 ] [ 31 ] [ 32 ] ), thus making RDFs an important tool in the analysis of data in these fields. Since RDFs are uniquely obtained from the data, [ 33 ] [ 34 ] they have many advantages over other mathematical and statistical methods that were developed for solving two-state trajectories. [ 35 ] [ 36 ] [ 37 ] [ 38 ] [ 39 ] [ 40 ] [ 41 ] [ 42 ] [ 43 ] [ 44 ] [ 45 ] [ 46 ] [ 47 ] An RDF is a lattice of substates; each substate represents either the on state or the off state and has a particular number (see Figure 1). Connections exist only among substates of different states. A simulation of an on-off trajectory from an RDF is made with a generalized Gillespie algorithm , where a random jumping time is first drawn, using the rejection method , from density functions that are (usually) not exponential, and then the specific next substate is chosen according to jumping probabilities that are determined from the jumping-time probability density functions. An RDF can have irreversible connections, yet it generates an on-off trajectory that has the property of microscopic reversibility , meaning that the physical system fluctuates around equilibrium. A two-state trajectory is a fluctuating signal made of alternating on periods and off periods: an on period, then an off period, and so on (see Fig. 2). In most cases where this signal appears in applications in science, the trajectory is random; that is, the lengths of the on and off periods vary and are random quantities. There may be correlations in the trajectory; e.g., when we see a short off period and the next on period is relatively long (that is, long with a large probability), we say that there are off-on correlations. In principle, there are four independent types of correlations in two-state trajectories: on-on, on-off, off-on, and off-off. Two-state trajectories can be obtained from on-off kinetic schemes , RDFs, or any other stochastic equation of motion (with a clear on-off definition). In experiments on individual molecules , two-state trajectories are common, and from the trajectory we aim to find the right model of the process. [ 48 ] It was shown in Ref. 1 [ 1 ] that RDFs are unique in the sense that a particular RDF generates a particular time trajectory (in a statistical sense), and a time trajectory is associated with only one RDF. This property does not hold for on-off kinetic schemes, where several kinetic schemes can be constructed from a trajectory; see, for example, Ref. 1. [ 1 ] RDFs are also constructed more reliably from the data than kinetic schemes. [ 33 ]
Figure 3 illustrates RDFs, kinetic schemes and two-state trajectories, and the relations among them. Given a two-state trajectory (generated from any mechanism), it is safer to start from the data and construct an RDF than to try to construct the kinetic scheme from the data directly. With the constructed RDF, one can then find several possible kinetic schemes very accurately (usually, one eventually tries constructing a kinetic scheme from the data), where these kinetic schemes are all equivalent with regard to the data.
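As an illustrative sketch only (not taken from the cited references), the simulation procedure described above (draw a non-exponential waiting time by rejection sampling, then pick the next substate from the jump probabilities) can be outlined as follows; the substate lattice, waiting-time densities and jump probabilities below are invented placeholders, and in an actual RDF the jump probabilities would be derived from the jumping-time densities.

```python
import math
import random

def sample_by_rejection(pdf, t_max, pdf_max):
    """Rejection sampling of a waiting time from a (generally non-exponential)
    density, truncated at t_max for simplicity."""
    while True:
        t = random.uniform(0.0, t_max)
        if random.uniform(0.0, pdf_max) <= pdf(t):
            return t

def simulate_on_off_trajectory(substates, start, n_jumps, seed=0):
    """Generate an on-off trajectory from an RDF-like lattice of substates.

    `substates` maps a substate name to (state label, waiting-time pdf,
    {next substate: jump probability}); all entries here are toy placeholders.
    """
    random.seed(seed)
    trajectory, current = [], start
    for _ in range(n_jumps):
        label, pdf, jumps = substates[current]
        dwell = sample_by_rejection(pdf, t_max=10.0, pdf_max=1.0)
        trajectory.append((label, dwell))
        targets, probs = zip(*jumps.items())
        current = random.choices(targets, weights=probs)[0]
    return trajectory

# Toy lattice: one "on" substate and one "off" substate with Gamma-like dwell times.
toy_rdf = {
    "on_1":  ("on",  lambda t: t * math.exp(-t), {"off_1": 1.0}),
    "off_1": ("off", lambda t: t * math.exp(-t), {"on_1": 1.0}),
}
print(simulate_on_off_trajectory(toy_rdf, start="on_1", n_jumps=4))
```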
https://en.wikipedia.org/wiki/Reduced_dimensions_form
Reduced frequency is a dimensionless number used in unsteady aerodynamics and aeroelasticity . It is one of the parameters that defines the degree of unsteadiness of the problem. [ 1 ] In flutter analysis, the lift history obtained from the Wagner analysis ( Herbert A. Wagner ) with varying frequency of oscillation shows that the magnitude of the lift decreases and a phase lag develops between the aircraft motion and the unsteady aerodynamic forces. Reduced frequency can be used to explain the amplitude attenuation and the phase lag of the unsteady aerodynamic forces compared to the quasi-steady analysis (which in theory assumes no phase lag). [ 2 ] Reduced frequency is denoted by the letter "k" and given by the expression k = ( ω × b ) / V {\displaystyle k=(\omega \times b)/V} where ω is the circular frequency of oscillation, b is the semi-chord of the airfoil and V is the flow velocity. The semi-chord is used instead of the chord due to its use in the derivation of unsteady lift based on thin airfoil theory. [ 3 ] Based on the value of the reduced frequency k, the flow can be roughly divided into:
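A minimal numerical sketch of the expression for k; the oscillation frequency, semi-chord and airspeed below are invented values used only for illustration.

```python
import math

def reduced_frequency(omega, semi_chord, airspeed):
    """Reduced frequency k = omega * b / V (dimensionless)."""
    return omega * semi_chord / airspeed

# Hypothetical case: a 2 m chord (b = 1 m semi-chord) oscillating at 5 Hz
# in a 50 m/s stream.
omega = 2 * math.pi * 5.0   # angular frequency in rad/s
k = reduced_frequency(omega, semi_chord=1.0, airspeed=50.0)
print(round(k, 3))          # ~0.628
```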
https://en.wikipedia.org/wiki/Reduced_frequency
In surveying , reduced level ( RL ) refers to equating elevations of survey points with reference to a common assumed vertical datum . It is the vertical distance between a survey point and the adopted datum surface. [ 1 ] Thus, it is considered the base level used as a reference to reckon heights or depths of other places or structures in that area, region or country. [ 2 ] The word "reduced" here means "equating" and the word "level" means "elevation". Datum may be a real or imaginary location with a nominated elevation. [ 3 ] The most common and convenient datum, and the one internationally accepted, is mean sea level , a universal measure based upon a common baseline determined by the earth's gravitational model (see geoid ) that gives the standard for measuring the elevation of a place above or below mean sea level. Countries adopt their nearby mean sea levels as datum planes for calculations of reduced levels in their respective jurisdictions. For example, Pakistan takes the sea near Karachi as its datum while India takes the sea near Mumbai as its datum for calculation of reduced levels. The term reduced level is abbreviated 'RL'. National survey departments of each country determine RLs of significantly important locations or points. These points are called permanent benchmarks and this survey process is known as Great Trigonometrical Surveying (GTS). The permanent benchmarks act as reference points for determining RLs of other locations in a particular country. [ 4 ] [ 5 ] [ 6 ] [ 7 ] The instruments used to determine reduced level include: RL of a survey point can be determined by two methods:
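The two methods usually meant here are the height-of-collimation (height of instrument) method and the rise-and-fall method. A minimal sketch of the former follows; the benchmark RL and staff readings are invented values used only for illustration.

```python
def reduced_levels_height_of_collimation(benchmark_rl, backsight, staff_readings):
    """Reduced levels of staff stations observed from a single instrument setup.

    Height of collimation = benchmark RL + backsight reading;
    RL of each point = height of collimation - staff reading at that point.
    """
    height_of_collimation = benchmark_rl + backsight
    return [round(height_of_collimation - reading, 3) for reading in staff_readings]

# Hypothetical readings in metres: benchmark RL 100.000, backsight 1.250,
# then staff readings 0.875 and 2.310 at two new points.
print(reduced_levels_height_of_collimation(100.000, 1.250, [0.875, 2.310]))
# [100.375, 98.94]
```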
https://en.wikipedia.org/wiki/Reduced_level
In physics , reduced mass is a measure of the effective inertial mass of a system with two or more particles when the particles are interacting with each other. Reduced mass allows the two-body problem to be solved as if it were a one-body problem . Note, however, that the mass determining the gravitational force is not reduced. In the computation, one mass can be replaced with the reduced mass, if this is compensated by replacing the other mass with the sum of both masses. The reduced mass is frequently denoted by μ {\displaystyle \mu } ( mu ), although the standard gravitational parameter is also denoted by μ {\displaystyle \mu } (as are a number of other physical quantities ). It has the dimensions of mass, and SI unit kg. Reduced mass is particularly useful in classical mechanics . Given two bodies, one with mass m 1 and the other with mass m 2 , the equivalent one-body problem, with the position of one body with respect to the other as the unknown, is that of a single body of mass [ 1 ] [ 2 ] μ = m 1 ∥ m 2 = 1 1 m 1 + 1 m 2 = m 1 m 2 m 1 + m 2 , {\displaystyle \mu =m_{1}\parallel m_{2}={\cfrac {1}{{\cfrac {1}{m_{1}}}+{\cfrac {1}{m_{2}}}}}={\cfrac {m_{1}m_{2}}{m_{1}+m_{2}}},} where the force on this mass is given by the force between the two bodies. The reduced mass is always less than or equal to the mass of each body: μ ≤ m 1 , μ ≤ m 2 {\displaystyle \mu \leq m_{1},\quad \mu \leq m_{2}} and has the reciprocal additive property: 1 μ = 1 m 1 + 1 m 2 {\displaystyle {\frac {1}{\mu }}={\frac {1}{m_{1}}}+{\frac {1}{m_{2}}}} which by re-arrangement is equivalent to half of the harmonic mean . In the special case that m 1 = m 2 {\displaystyle m_{1}=m_{2}} : μ = m 1 2 = m 2 2 {\displaystyle \mu ={\frac {m_{1}}{2}}={\frac {m_{2}}{2}}} If m 1 ≫ m 2 {\displaystyle m_{1}\gg m_{2}} , then μ ≈ m 2 {\displaystyle \mu \approx m_{2}} . The equation can be derived as follows. Using Newton's second law , the force exerted by a body (particle 2) on another body (particle 1) is: F 12 = m 1 a 1 {\displaystyle \mathbf {F} _{12}=m_{1}\mathbf {a} _{1}} The force exerted by particle 1 on particle 2 is: F 21 = m 2 a 2 {\displaystyle \mathbf {F} _{21}=m_{2}\mathbf {a} _{2}} According to Newton's third law , the force that particle 2 exerts on particle 1 is equal and opposite to the force that particle 1 exerts on particle 2: F 12 = − F 21 {\displaystyle \mathbf {F} _{12}=-\mathbf {F} _{21}} Therefore: m 1 a 1 = − m 2 a 2 ⇒ a 2 = − m 1 m 2 a 1 {\displaystyle m_{1}\mathbf {a} _{1}=-m_{2}\mathbf {a} _{2}\;\;\Rightarrow \;\;\mathbf {a} _{2}=-{m_{1} \over m_{2}}\mathbf {a} _{1}} The relative acceleration a rel between the two bodies is given by: a rel := a 1 − a 2 = ( 1 + m 1 m 2 ) a 1 = m 2 + m 1 m 1 m 2 m 1 a 1 = F 12 μ {\displaystyle \mathbf {a} _{\text{rel}}:=\mathbf {a} _{1}-\mathbf {a} _{2}=\left(1+{\frac {m_{1}}{m_{2}}}\right)\mathbf {a} _{1}={\frac {m_{2}+m_{1}}{m_{1}m_{2}}}m_{1}\mathbf {a} _{1}={\frac {\mathbf {F} _{12}}{\mu }}} Note that (since the derivative is a linear operator) the relative acceleration a rel {\displaystyle \mathbf {a} _{\text{rel}}} is equal to the acceleration of the separation x rel {\displaystyle \mathbf {x} _{\text{rel}}} between the two particles. 
a rel = a 1 − a 2 = d 2 x 1 d t 2 − d 2 x 2 d t 2 = d 2 d t 2 ( x 1 − x 2 ) = d 2 x rel d t 2 {\displaystyle \mathbf {a} _{\text{rel}}=\mathbf {a} _{1}-\mathbf {a} _{2}={\frac {d^{2}\mathbf {x} _{1}}{dt^{2}}}-{\frac {d^{2}\mathbf {x} _{2}}{dt^{2}}}={\frac {d^{2}}{dt^{2}}}\left(\mathbf {x} _{1}-\mathbf {x} _{2}\right)={\frac {d^{2}\mathbf {x} _{\text{rel}}}{dt^{2}}}} This simplifies the description of the system to one force (since F 12 = − F 21 {\displaystyle \mathbf {F} _{12}=-\mathbf {F} _{21}} ), one coordinate x rel {\displaystyle \mathbf {x} _{\text{rel}}} , and one mass μ {\displaystyle \mu } . Thus we have reduced our problem to a single degree of freedom, and we can conclude that particle 1 moves with respect to the position of particle 2 as a single particle of mass equal to the reduced mass, μ {\displaystyle \mu } . Alternatively, a Lagrangian description of the two-body problem gives a Lagrangian of L = 1 2 m 1 r ˙ 1 2 + 1 2 m 2 r ˙ 2 2 − V ( | r 1 − r 2 | ) {\displaystyle {\mathcal {L}}={1 \over 2}m_{1}\mathbf {\dot {r}} _{1}^{2}+{1 \over 2}m_{2}\mathbf {\dot {r}} _{2}^{2}-V(|\mathbf {r} _{1}-\mathbf {r} _{2}|)} where r i {\displaystyle {\mathbf {r} }_{i}} is the position vector of mass m i {\displaystyle m_{i}} (of particle i {\displaystyle i} ). The potential energy V is a function as it is only dependent on the absolute distance between the particles. If we define r = r 1 − r 2 {\displaystyle \mathbf {r} =\mathbf {r} _{1}-\mathbf {r} _{2}} and let the centre of mass coincide with our origin in this reference frame, i.e. m 1 r 1 + m 2 r 2 = 0 , {\displaystyle m_{1}\mathbf {r} _{1}+m_{2}\mathbf {r} _{2}=0,} then r 1 = m 2 r m 1 + m 2 , r 2 = − m 1 r m 1 + m 2 . {\displaystyle \mathbf {r} _{1}={\frac {m_{2}\mathbf {r} }{m_{1}+m_{2}}},\;\mathbf {r} _{2}=-{\frac {m_{1}\mathbf {r} }{m_{1}+m_{2}}}.} Then substituting above gives a new Lagrangian L = 1 2 μ r ˙ 2 − V ( r ) , {\displaystyle {\mathcal {L}}={\frac {1}{2}}\mu \mathbf {\dot {r}} ^{2}-V(r),} where μ = m 1 m 2 m 1 + m 2 {\displaystyle \mu ={\frac {m_{1}m_{2}}{m_{1}+m_{2}}}} is the reduced mass. Thus we have reduced the two-body problem to that of one body. Reduced mass can be used in a multitude of two-body problems, where classical mechanics is applicable. In a system with two point masses m 1 {\displaystyle m_{1}} and m 2 {\displaystyle m_{2}} such that they are co-linear, the two distances r 1 {\displaystyle r_{1}} and r 2 {\displaystyle r_{2}} to the rotation axis may be found with r 1 = R m 2 m 1 + m 2 {\displaystyle r_{1}=R{\frac {m_{2}}{m_{1}+m_{2}}}} r 2 = R m 1 m 1 + m 2 {\displaystyle r_{2}=R{\frac {m_{1}}{m_{1}+m_{2}}}} where R {\displaystyle R} is the sum of both distances R = r 1 + r 2 {\displaystyle R=r_{1}+r_{2}} . This holds for a rotation around the center of mass. The moment of inertia around this axis can be then simplified to I = m 1 r 1 2 + m 2 r 2 2 = R 2 m 1 m 2 2 ( m 1 + m 2 ) 2 + R 2 m 1 2 m 2 ( m 1 + m 2 ) 2 = μ R 2 . {\displaystyle I=m_{1}r_{1}^{2}+m_{2}r_{2}^{2}=R^{2}{\frac {m_{1}m_{2}^{2}}{(m_{1}+m_{2})^{2}}}+R^{2}{\frac {m_{1}^{2}m_{2}}{(m_{1}+m_{2})^{2}}}=\mu R^{2}.} In a collision with a coefficient of restitution e , the change in kinetic energy can be written as Δ K = 1 2 μ v rel 2 ( e 2 − 1 ) , {\displaystyle \Delta K={\frac {1}{2}}\mu v_{\text{rel}}^{2}\left(e^{2}-1\right),} where v rel is the relative velocity of the bodies before collision . 
For typical applications in nuclear physics, where one particle's mass is much larger than the other's, the reduced mass can be approximated as the smaller mass of the system. The limit of the reduced-mass formula as one mass goes to infinity is the smaller mass, so this approximation is used to ease calculations, especially when the larger particle's exact mass is not known. In the case of the gravitational potential energy V ( | r 1 − r 2 | ) = − G m 1 m 2 | r 1 − r 2 | , {\displaystyle V(|\mathbf {r} _{1}-\mathbf {r} _{2}|)=-{\frac {Gm_{1}m_{2}}{|\mathbf {r} _{1}-\mathbf {r} _{2}|}}\,,} we find that the position of the first body with respect to the second is governed by the same differential equation as the position of a body with the reduced mass orbiting a body of mass M equal to the sum of the two masses, because m 1 m 2 = ( m 1 + m 2 ) μ ; {\displaystyle m_{1}m_{2}=\left(m_{1}+m_{2}\right)\mu ;} any other pair of masses with the same sum M would have a different product and so would not reproduce the same motion. Consider the electron (mass m e ) and proton (mass m p ) in the hydrogen atom . [ 3 ] They orbit each other about a common centre of mass, a two-body problem. To analyze the motion of the electron, a one-body problem, the reduced mass replaces the electron mass m e → m e m p m e + m p {\displaystyle m_{\text{e}}\rightarrow {\frac {m_{\text{e}}m_{\text{p}}}{m_{\text{e}}+m_{\text{p}}}}} This idea is used to set up the Schrödinger equation for the hydrogen atom.
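A short numerical illustration of the hydrogen-atom case; the masses are approximate values quoted here only for illustration.

```python
def reduced_mass(m1, m2):
    """mu = m1 * m2 / (m1 + m2)."""
    return m1 * m2 / (m1 + m2)

m_e = 9.109e-31   # electron mass, kg (approximate)
m_p = 1.673e-27   # proton mass, kg (approximate)

mu = reduced_mass(m_e, m_p)
print(mu)          # ~9.104e-31 kg
print(mu / m_e)    # ~0.9995 -- since m_p >> m_e, mu is only slightly below m_e
```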
https://en.wikipedia.org/wiki/Reduced_mass
In model theory , a branch of mathematical logic , and in algebra , the reduced product is a construction that generalizes both direct product and ultraproduct . Let { S i | i ∈ I } be a nonempty family of structures of the same signature σ indexed by a set I , and let U be a proper filter on I . The domain of the reduced product is the quotient of the Cartesian product by a certain equivalence relation ~: two elements ( a i ) and ( b i ) of the Cartesian product are equivalent if the set of indices on which they agree belongs to the filter, that is, if { i ∈ I : a i = b i } ∈ U . If U only contains I as an element, the equivalence relation is trivial, and the reduced product is just the direct product. If U is an ultrafilter , the reduced product is an ultraproduct. Operations from σ are interpreted on the reduced product by applying the operation pointwise. Relations are interpreted by declaring that a relation R holds of a tuple of equivalence classes exactly when the set of indices i for which R holds of the corresponding components belongs to U . For example, if each structure is a vector space , then the reduced product is a vector space with addition defined as ( a + b ) i = a i + b i and multiplication by a scalar c as ( ca ) i = c a i . This mathematical logic -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Reduced_product
In thermodynamics , the reduced properties of a fluid are a set of state variables scaled by the fluid's state properties at its critical point . These dimensionless thermodynamic coordinates, taken together with a substance's compressibility factor , provide the basis for the simplest form of the theorem of corresponding states . [ 1 ] Reduced properties are also used to define the Peng–Robinson equation of state , a model designed to provide reasonable accuracy near the critical point. [ 2 ] They are also used to define critical exponents , which describe the behaviour of physical quantities near continuous phase transitions. [ 3 ] The reduced pressure is defined as the actual pressure p {\displaystyle p} divided by the critical pressure p c {\displaystyle p_{\rm {c}}} : [ 1 ] The reduced temperature of a fluid is its actual temperature divided by its critical temperature : [ 1 ] where the actual temperature and critical temperature are expressed in absolute temperature scales (either Kelvin or Rankine ). Both the reduced temperature and the reduced pressure are often used in thermodynamical formulas like the Peng–Robinson equation of state. The reduced specific volume (or "pseudo-reduced specific volume") of a fluid is computed from the ideal gas law at the substance's critical pressure and temperature: [ 1 ] This property is useful when the specific volume and either the temperature or the pressure are known, in which case the missing third property can be computed directly.
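A minimal sketch of the three definitions above; the critical constants and state point used are rough, nitrogen-like values chosen only to make the example concrete.

```python
def reduced_properties(p, T, v, p_c, T_c, R):
    """Reduced pressure, reduced temperature and pseudo-reduced specific volume."""
    p_r = p / p_c
    T_r = T / T_c
    v_r = v * p_c / (R * T_c)   # v divided by the ideal-gas volume R*T_c/p_c
    return p_r, T_r, v_r

# Roughly nitrogen-like constants (approximate): p_c ~ 3.39 MPa, T_c ~ 126.2 K,
# specific gas constant R ~ 296.8 J/(kg K); state point 1 MPa, 300 K, 0.089 m^3/kg.
print(reduced_properties(p=1.0e6, T=300.0, v=0.089,
                         p_c=3.39e6, T_c=126.2, R=296.8))
```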
https://en.wikipedia.org/wiki/Reduced_properties
In chemistry , a reducing agent (also known as a reductant , reducer , or electron donor ) is a chemical species that "donates" an electron to an electron recipient (called the oxidizing agent , oxidant , oxidizer , or electron acceptor ). Examples of substances that are common reducing agents include hydrogen , carbon monoxide , the alkali metals , formic acid , [ 1 ] oxalic acid , [ 2 ] and sulfite compounds. In their pre-reaction states, reducers have extra electrons (that is, they are by themselves reduced) and oxidizers lack electrons (that is, they are by themselves oxidized). This is commonly expressed in terms of their oxidation states. An agent's oxidation state describes its degree of loss of electrons, where the higher the oxidation state then the fewer electrons it has. So initially, prior to the reaction, a reducing agent is typically in one of its lower possible oxidation states; its oxidation state increases during the reaction while that of the oxidizer decreases. Thus in a redox reaction, the agent whose oxidation state increases, that "loses/ donates electrons", that "is oxidized", and that "reduces" is called the reducer or reducing agent , while the agent whose oxidation state decreases, that "gains/ accepts /receives electrons", that "is reduced", and that "oxidizes" is called the oxidizer or oxidizing agent . For example, consider the overall reaction for aerobic cellular respiration : The oxygen ( O 2 ) is being reduced, so it is the oxidizing agent. The glucose ( C 6 H 12 O 6 ) is being oxidized, so it is the reducing agent. Consider the following reaction: The reducing agent in this reaction is ferrocyanide ( [Fe(CN) 6 ] 4− ). It donates an electron, becoming oxidized to ferricyanide ( [Fe(CN) 6 ] 3− ). Simultaneously, that electron is received by the oxidizer chlorine ( Cl 2 ), which is reduced to chloride ( Cl − ). Strong reducing agents easily lose (or donate) electrons. An atom with a relatively large atomic radius tends to be a better reductant. In such species, the distance from the nucleus to the valence electrons is so long that these electrons are not strongly attracted. These elements tend to be strong reducing agents. Good reducing agents tend to consist of atoms with a low electronegativity , which is the ability of an atom or molecule to attract bonding electrons, and species with relatively small ionization energies serve as good reducing agents too. [ citation needed ] The measure of a material's ability to reduce is known as its reduction potential . [ 3 ] The table below shows a few reduction potentials, which can be changed to oxidation potentials by reversing the sign. Reducing agents can be ranked by increasing strength by ranking their reduction potentials. Reducers donate electrons to (that is, "reduce") oxidizing agents , which are said to "be reduced by" the reducer. The reducing agent is stronger when it has a more negative reduction potential and weaker when it has a more positive reduction potential. The more positive the reduction potential the greater the species' affinity for electrons and tendency to be reduced (that is, to receive electrons). The following table provides the reduction potentials of the indicated reducing agent at 25 °C. For example, among sodium (Na), chromium (Cr), cuprous (Cu + ) and chloride (Cl − ), it is Na that is the strongest reducing agent while Cl − is the weakest; said differently, Na + is the weakest oxidizing agent in this list while Cl is the strongest. 
[ citation needed ] Common reducing agents include metals potassium, calcium, barium, sodium and magnesium, and also compounds that contain the hydride H − ion, those being NaH , LiH , [ 5 ] LiAlH 4 and CaH 2 . Some elements and compounds can be both reducing or oxidizing agents . Hydrogen gas is a reducing agent when it reacts with non-metals and an oxidizing agent when it reacts with metals. Hydrogen (whose reduction potential is 0.0) acts as an oxidizing agent because it accepts an electron donation from the reducing agent lithium (whose reduction potential is -3.04), which causes Li to be oxidized and hydrogen to be reduced. Hydrogen acts as a reducing agent because it donates its electrons to fluorine , which allows fluorine to be reduced. Reducing agents and oxidizing agents are the ones responsible for corrosion , which is the "degradation of metals as a result of electrochemical activity". [ 3 ] Corrosion requires an anode and cathode to take place. The anode is an element that loses electrons (reducing agent), thus oxidation always occurs in the anode, and the cathode is an element that gains electrons (oxidizing agent), thus reduction always occurs in the cathode. Corrosion occurs whenever there's a difference in oxidation potential. When this is present, the anode metal begins deteriorating, given there is an electrical connection and the presence of an electrolyte . [ citation needed ] Historically, reduction referred to the removal of oxygen from a compound, hence the name 'reduction'. [ 7 ] An example of this phenomenon occurred during the Great Oxidation Event , in which biologically−produced molecular oxygen ( dioxygen ( O 2 ), an oxidizer and electron recipient) was added to the early Earth's atmosphere , which was originally a weakly reducing atmosphere containing reducing gases like methane ( CH 4 ) and carbon monoxide ( CO ) (along with other electron donors) [ 8 ] and practically no oxygen because any that was produced would react with these or other reducers (particularly with iron dissolved in sea water ), resulting in their removal . By using water as a reducing agent, aquatic photosynthesizing cyanobacteria produced this molecular oxygen as a waste product. [ 9 ] This O 2 initially oxidized the ocean's dissolved ferrous iron (Fe(II) − meaning iron in its +2 oxidation state) to form insoluble ferric iron oxides such as Iron(III) oxide (Fe(II) lost an electron to the oxidizer and became Fe(III) − meaning iron in its +3 oxidation state) that precipitated down to the ocean floor to form banded iron formations , thereby removing the oxygen (and the iron). The rate of production of oxygen eventually exceeded the availability of reducing materials that removed oxygen, which ultimately led Earth to gain a strongly oxidizing atmosphere containing abundant oxygen (like the modern atmosphere ). [ 10 ] The modern sense of donating electrons is a generalization of this idea, acknowledging that other components can play a similar chemical role to oxygen. The formation of iron(III) oxide ; In the above equation, the Iron (Fe) has an oxidation number of 0 before and 3+ after the reaction. For oxygen (O) the oxidation number began as 0 and decreased to 2−. These changes can be viewed as two " half-reactions " that occur concurrently: Iron (Fe) has been oxidized because the oxidation number increased. Iron is the reducing agent because it gave electrons to the oxygen (O 2 ). 
Oxygen (O 2 ) has been reduced because its oxidation number has decreased, and it is the oxidizing agent because it took electrons from iron (Fe).
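The ranking by reduction potential described above can be illustrated with a short sketch; the potentials are approximate textbook values included only to show the ordering idea.

```python
# Approximate standard reduction potentials (V) for the couples mentioned above.
standard_reduction_potential = {
    "Li+ / Li":  -3.04,
    "Na+ / Na":  -2.71,
    "Cr3+ / Cr": -0.74,
    "Cu+ / Cu":  +0.52,
    "Cl2 / Cl-": +1.36,
}

# The more negative the potential, the stronger the reduced member of the couple
# is as a reducing agent.
for couple, potential in sorted(standard_reduction_potential.items(),
                                key=lambda kv: kv[1]):
    print(f"{couple}: {potential:+.2f} V")
# Li and Na head the list (strong reducers); Cl- is the weakest reducer here.
```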
https://en.wikipedia.org/wiki/Reducing_agent
A reducing atmosphere is an atmosphere in which oxidation is prevented by the absence of oxygen and other oxidizing gases or vapours, and which may contain actively reductant gases such as hydrogen , carbon monoxide , methane and hydrogen sulfide that would be readily oxidized to remove any free oxygen. Although Early Earth had a reducing prebiotic atmosphere prior to the Proterozoic eon , starting at about 2.5 billion years ago in the late Neoarchaean period , the Earth's atmosphere experienced a significant rise in oxygen and transitioned to an oxidizing atmosphere with a surplus of molecular oxygen ( dioxygen , O 2 ) as the primary oxidizing agent . The principal mission of an iron foundry is the conversion of iron oxides (purified iron ores) to iron metal. This reduction is usually effected using a reducing atmosphere consisting of some mixture of natural gas , hydrogen (H 2 ), and carbon monoxide . The byproduct is carbon dioxide . [ 1 ] In metal processing, a reducing atmosphere is used in annealing ovens for relaxation of metal stresses without corroding the metal. A non-oxidizing gas, usually nitrogen or argon , is typically used as a carrier gas so that diluted amounts of reducing gases may be used. Typically, this is achieved through using the combustion products of fuels and tailoring the ratio of CO:CO 2 . However, other common reducing atmospheres in the metal processing industries consist of dissociated ammonia, vacuum, and direct mixing of appropriately pure gases of N 2 , Ar, and H 2 . [ 2 ] A reducing atmosphere is also used to produce specific effects on ceramic wares being fired. A reduction atmosphere is produced in a fuel fired kiln by reducing the draft and depriving the kiln of oxygen. This diminished level of oxygen causes incomplete combustion of the fuel and raises the level of carbon inside the kiln. At high temperatures the carbon will bond with and remove the oxygen in the metal oxides used as colorants in the glazes. This loss of oxygen results in a change in the color of the glazes because it allows the metals in the glaze to be seen in an unoxidized form. A reduction atmosphere can also affect the color of the clay body. If iron is present in the clay body, as it is in most stoneware , then it will be affected by the reduction atmosphere as well. In most commercial incinerators , exactly the same conditions are created to encourage the release of carbon-bearing fumes. These fumes are then oxidized in reburn tunnels where oxygen is injected progressively. The exothermic oxidation reaction maintains the temperature of the reburn tunnels. This system allows lower temperatures to be employed in the incinerator section, where the solids are volumetrically reduced. The atmosphere of Early Earth is widely speculated to have been reducing. The Miller–Urey experiment , related to some hypotheses for the origin of life, entailed reactions in a reducing atmosphere composed of a mixed atmosphere of methane , ammonia and hydrogen sulfide . [ 3 ] [ 4 ] Some hypotheses for the origin of life invoke a reducing atmosphere consisting of hydrogen cyanide (HCN). Experiments show that HCN can polymerize in the presence of ammonia to give various products including amino acids . [ 5 ] The same principle applies to Mars , Venus and Titan . 
Cyanobacteria are suspected to be the first photoautotrophs that evolved oxygenic photosynthesis , which over the latter half of the Archaean eon eventually depleted all reductants in the Earth's oceans, terrestrial surface and atmosphere, gradually increasing the oxygen concentration in the atmosphere, changing it to what is known as an oxidizing atmosphere. This rising oxygen initially led to a 300 million-year-long ice age that devastated the then-mostly anaerobe -dominated biosphere , forcing the surviving anaerobic colonies to evolve into symbiotic microbial mats with the newly evolved aerobes . Some aerobic bacteria eventually became endosymbionts within other anaerobes (likely archaea ), and the resultant symbiogenesis led to the evolution of an entirely new lineage of life — the eukaryotes , which took advantage of mitochondrial aerobic respiration to power their cellular activities, allowing life to thrive and evolve into ever more complex forms. [ 6 ] The increased oxygen in the atmosphere also eventually created the ozone layer , which shielded out harmful ionizing ultraviolet radiation that otherwise would have photodissociated surface water and rendered life impossible on land and at the ocean surface. In contrast to the hypothesized early reducing atmosphere, evidence exists that Hadean atmospheric oxygen levels were similar to today's. [ 7 ] These results suggest prebiotic building blocks were delivered from elsewhere in the galaxy. The results, however, do not contradict existing theories on life's journey from anaerobic to aerobic organisms. The results quantify the nature of gas molecules containing carbon, hydrogen, and sulphur in the earliest atmosphere, but they shed no light on the much later rise of free oxygen in the air. [ 8 ]
https://en.wikipedia.org/wiki/Reducing_atmosphere
In linear algebra , a reducing subspace W {\displaystyle W} of a linear map T : V → V {\displaystyle T:V\to V} from a Hilbert space V {\displaystyle V} to itself is an invariant subspace of T {\displaystyle T} whose orthogonal complement W ⊥ {\displaystyle W^{\perp }} is also an invariant subspace of T . {\displaystyle T.} That is, T ( W ) ⊆ W {\displaystyle T(W)\subseteq W} and T ( W ⊥ ) ⊆ W ⊥ . {\displaystyle T(W^{\perp })\subseteq W^{\perp }.} One says that the subspace W {\displaystyle W} reduces the map T . {\displaystyle T.} One says that a linear map is reducible if it has a nontrivial reducing subspace. Otherwise one says it is irreducible . If V {\displaystyle V} is of finite dimension r {\displaystyle r} and W {\displaystyle W} is a reducing subspace of the map T : V → V {\displaystyle T:V\to V} represented under basis B {\displaystyle B} by matrix M ∈ R r × r {\displaystyle M\in \mathbb {R} ^{r\times r}} then M {\displaystyle M} can be expressed as the sum M = P W M P W + P W ⊥ M P W ⊥ {\displaystyle M=P_{W}MP_{W}+P_{W^{\perp }}MP_{W^{\perp }}} where P W ∈ R r × r {\displaystyle P_{W}\in \mathbb {R} ^{r\times r}} is the matrix of the orthogonal projection from V {\displaystyle V} to W {\displaystyle W} and P W ⊥ = I − P W {\displaystyle P_{W^{\perp }}=I-P_{W}} is the matrix of the projection onto W ⊥ . {\displaystyle W^{\perp }.} [ 1 ] (Here I ∈ R r × r {\displaystyle I\in \mathbb {R} ^{r\times r}} is the identity matrix .) Furthermore, V {\displaystyle V} has an orthonormal basis B ′ {\displaystyle B'} with a subset that is an orthonormal basis of W {\displaystyle W} . If Q ∈ R r × r {\displaystyle Q\in \mathbb {R} ^{r\times r}} is the transition matrix from B {\displaystyle B} to B ′ {\displaystyle B'} then with respect to B ′ {\displaystyle B'} the matrix Q − 1 M Q {\displaystyle Q^{-1}MQ} representing T {\displaystyle T} is a block-diagonal matrix Q − 1 M Q = [ A 0 0 B ] {\displaystyle Q^{-1}MQ=\left[{\begin{array}{cc}A&0\\0&B\end{array}}\right]} with A ∈ R d × d , {\displaystyle A\in \mathbb {R} ^{d\times d},} where d = dim ⁡ W {\displaystyle d=\dim W} , and B ∈ R ( r − d ) × ( r − d ) . {\displaystyle B\in \mathbb {R} ^{(r-d)\times (r-d)}.} This article about matrices is a stub . You can help Wikipedia by expanding it .
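A small numerical check of the decomposition M = P_W M P_W + P_{W⊥} M P_{W⊥} described above; the matrix and subspace are invented for illustration.

```python
import numpy as np

# A block-diagonal map on R^3; the plane W spanned by e1 and e2 reduces it.
M = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 5.0]])

P_W  = np.diag([1.0, 1.0, 0.0])   # orthogonal projection onto W
P_Wc = np.eye(3) - P_W            # projection onto the orthogonal complement of W

# For a reducing subspace, M splits as P_W M P_W + P_Wc M P_Wc.
split = P_W @ M @ P_W + P_Wc @ M @ P_Wc
print(np.allclose(M, split))      # True
```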
https://en.wikipedia.org/wiki/Reducing_subspace
A reducing sugar is any sugar that is capable of acting as a reducing agent . [ 1 ] In an alkaline solution, a reducing sugar forms some aldehyde or ketone , which allows it to act as a reducing agent, for example in Benedict's reagent . In such a reaction, the sugar becomes a carboxylic acid . All monosaccharides are reducing sugars, along with some disaccharides , some oligosaccharides , and some polysaccharides . The monosaccharides can be divided into two groups: the aldoses , which have an aldehyde group, and the ketoses , which have a ketone group. Ketoses must first tautomerize to aldoses before they can act as reducing sugars. The common dietary monosaccharides galactose , glucose and fructose are all reducing sugars. Disaccharides are formed from two monosaccharides and can be classified as either reducing or nonreducing. Nonreducing disaccharides like sucrose and trehalose have glycosidic bonds between their anomeric carbons and thus cannot convert to an open-chain form with an aldehyde group; they are stuck in the cyclic form. Reducing disaccharides like lactose and maltose have only one of their two anomeric carbons involved in the glycosidic bond, while the other is free and can convert to an open-chain form with an aldehyde group. The aldehyde functional group allows the sugar to act as a reducing agent, for example, in the Tollens' test or Benedict's test . The cyclic hemiacetal forms of aldoses can open to reveal an aldehyde, and certain ketoses can undergo tautomerization to become aldoses. However, acetals , including those found in polysaccharide linkages, cannot easily become free aldehydes. Reducing sugars react with amino acids in the Maillard reaction , a series of reactions that occurs while cooking food at high temperatures and that is important in determining the flavor of food. Also, the levels of reducing sugars in wine, juice, and sugarcane are indicative of the quality of these food products. A reducing sugar is one that reduces another compound and is itself oxidized ; that is, the carbonyl carbon of the sugar is oxidized to a carboxyl group. [ 2 ] A sugar is classified as a reducing sugar only if it has an open-chain form with an aldehyde group or a free hemiacetal group. [ 3 ] Monosaccharides which contain an aldehyde group are known as aldoses , and those with a ketone group are known as ketoses . The aldehyde can be oxidized via a redox reaction in which another compound is reduced. Thus, aldoses are reducing sugars. Sugars with ketone groups in their open chain form are capable of isomerizing via a series of tautomeric shifts to produce an aldehyde group in solution. Therefore, ketones like fructose are considered reducing sugars but it is the isomer containing an aldehyde group which is reducing since ketones cannot be oxidized without decomposition of the sugar. This type of isomerization is catalyzed by the base present in solutions which test for the presence of reducing sugars. [ 3 ] Disaccharides consist of two monosaccharides and may be either reducing or nonreducing. Even a reducing disaccharide will only have one reducing end, as disaccharides are held together by glycosidic bonds , which consist of at least one anomeric carbon . With one anomeric carbon unable to convert to the open-chain form, only the free anomeric carbon is available to reduce another compound, and it is called the reducing end of the disaccharide. A nonreducing disaccharide is that which has both anomeric carbons tied up in the glycosidic bond. 
[ 4 ] Similarly, most polysaccharides have only one reducing end. All monosaccharides are reducing sugars because they either have an aldehyde group (if they are aldoses) or can tautomerize in solution to form an aldehyde group (if they are ketoses). [ 5 ] This includes common monosaccharides like galactose , glucose , glyceraldehyde , fructose , ribose , and xylose . Many disaccharides , like cellobiose , lactose , and maltose , also have a reducing form, as one of the two units may have an open-chain form with an aldehyde group. [ 6 ] However, sucrose and trehalose , in which the anomeric carbon atoms of the two units are linked together, are nonreducing disaccharides since neither of the rings is capable of opening. [ 5 ] In glucose polymers such as starch and starch- derivatives like glucose syrup , maltodextrin and dextrin the macromolecule begins with a reducing sugar, a free aldehyde. When starch has been partially hydrolyzed the chains have been split and hence it contains more reducing sugars per gram. The percentage of reducing sugars present in these starch derivatives is called dextrose equivalent (DE). Glycogen is a highly branched polymer of glucose that serves as the main form of carbohydrate storage in animals. It is a reducing sugar with only one reducing end, no matter how large the glycogen molecule is or how many branches it has (note, however, that the unique reducing end is usually covalently linked to glycogenin and will therefore not be reducing). Each branch ends in a nonreducing sugar residue. When glycogen is broken down to be used as an energy source, glucose units are removed one at a time from the nonreducing ends by enzymes. [ 2 ] Several qualitative tests are used to detect the presence of reducing sugars. Two of them use solutions of copper(II) ions: Benedict's reagent (Cu 2+ in aqueous sodium citrate) and Fehling's solution (Cu 2+ in aqueous sodium tartrate). [ 7 ] The reducing sugar reduces the copper(II) ions in these test solutions to copper(I), which then forms a brick red copper(I) oxide precipitate. Reducing sugars can also be detected with the addition of Tollen's reagent , which consist of silver ions (Ag + ) in aqueous ammonia. [ 7 ] When Tollen's reagent is added to an aldehyde, it precipitates silver metal, often forming a silver mirror on clean glassware. [ 3 ] 3,5-dinitrosalicylic acid is another test reagent, one that allows quantitative detection. It reacts with a reducing sugar to form 3-amino-5-nitrosalicylic acid , which can be measured by spectrophotometry to determine the amount of reducing sugar that was present. [ 8 ] Some sugars, such as sucrose, do not react with any of the reducing-sugar test solutions. However, a non-reducing sugar can be hydrolyzed using dilute hydrochloric acid . After hydrolysis and neutralization of the acid, the product may be a reducing sugar that gives normal reactions with the test solutions. All carbohydrates are converted to aldehydes and respond positively in Molisch's test . But the test has a faster rate when it comes to monosaccharides. Fehling's solution was used for many years as a diagnostic test for diabetes , a disease in which blood glucose levels are dangerously elevated by a failure to produce enough insulin ( type 1 diabetes ) or by an inability to respond to insulin ( type 2 diabetes ). Measuring the amount of oxidizing agent (in this case, Fehling's solution) reduced by glucose makes it possible to determine the concentration of glucose in the blood or urine. 
This then enables the right amount of insulin to be injected to bring blood glucose levels back into the normal range. [ 2 ] The carbonyl groups of reducing sugars react with the amino groups of amino acids in the Maillard reaction , a complex series of reactions that occurs when cooking food. [ 9 ] Maillard reaction products (MRPs) are diverse; some are beneficial to human health, while others are toxic. However, the overall effect of the Maillard reaction is to decrease the nutritional value of food. [ 10 ] One example of a toxic product of the Maillard reaction is acrylamide , a neurotoxin and possible carcinogen that is formed from free asparagine and reducing sugars when cooking starchy foods at high temperatures (above 120 °C). [ 11 ] However, evidence from epidemiological studies suggest that dietary acrylamide is unlikely to raise the risk of people developing cancer. [ 12 ] The level of reducing sugars in wine, juice, and sugarcane are indicative of the quality of these food products, and monitoring the levels of reducing sugars during food production has improved market quality. The conventional method for doing so is the Lane-Eynon method, which involves titrating the reducing sugar with copper(II) in Fehling's solution in the presence of methylene blue , a common redox indicator . However, it is inaccurate, expensive, and sensitive to impurities. [ 13 ]
https://en.wikipedia.org/wiki/Reducing_sugar
In universal algebra and in model theory , a reduct of an algebraic structure is obtained by omitting some of the operations and relations of that structure. The opposite of "reduct" is "expansion". Let A be an algebraic structure (in the sense of universal algebra ) or a structure in the sense of model theory , organized as a set X together with an indexed family of operations and relations φ i on that set, with index set I . Then the reduct of A defined by a subset J of I is the structure consisting of the set X and J -indexed family of operations and relations whose j -th operation or relation for j ∈ J is the j -th operation or relation of A . That is, this reduct is the structure A with the omission of those operations and relations φ i for which i is not in J . A structure A is an expansion of B just when B is a reduct of A . That is, reduct and expansion are mutual converses. The monoid ( Z , +, 0) of integers under addition is a reduct of the group ( Z , +, −, 0) of integers under addition and negation , obtained by omitting negation. By contrast, the monoid ( N , +, 0) of natural numbers under addition is not the reduct of any group. Conversely the group ( Z , +, −, 0) is the expansion of the monoid ( Z , +, 0), expanding it with the operation of negation.
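A toy sketch of the integer example above, representing a structure as a plain mapping from operation names to operations; this representation is an assumption made only for illustration, not a standard library API.

```python
# The group of integers (Z, +, -, 0) as a dict of named operations and constants,
# and its reduct to the monoid (Z, +, 0) obtained by omitting negation.
group_of_integers = {
    "add": lambda x, y: x + y,
    "neg": lambda x: -x,
    "zero": 0,
}

def reduct(structure, kept_symbols):
    """Keep only the operations/constants named in kept_symbols."""
    return {name: op for name, op in structure.items() if name in kept_symbols}

monoid_of_integers = reduct(group_of_integers, {"add", "zero"})
print(sorted(monoid_of_integers))   # ['add', 'zero']
```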
https://en.wikipedia.org/wiki/Reduct
In mathematics , reduction refers to the rewriting of an expression into a simpler form. For example, the process of rewriting a fraction into one with the smallest whole-number denominator possible (while keeping the numerator a whole number) is called " reducing a fraction ". Rewriting a radical (or "root") expression with the smallest possible whole number under the radical symbol is called "reducing a radical". Minimizing the number of radicals that appear underneath other radicals in an expression is called denesting radicals . In linear algebra , reduction refers to applying simple rules to a series of equations or matrices to change them into a simpler form. In the case of matrices, the process involves manipulating either the rows or the columns of the matrix and so is usually referred to as row-reduction or column-reduction , respectively. Often the aim of reduction is to transform a matrix into its "row-reduced echelon form " or "row-echelon form"; this is the goal of Gaussian elimination . In calculus , reduction refers to using the technique of integration by parts to evaluate integrals by reducing them to simpler forms. In dynamic analysis, static reduction refers to reducing the number of degrees of freedom. Static reduction can also be used in finite element analysis to refer to simplification of a linear algebraic problem. Since a static reduction requires several inversion steps, it is an expensive matrix operation and is prone to some error in the solution. Consider the following system of linear equations in an FEA problem: [ K 11 K 12 K 21 K 22 ] [ x 1 x 2 ] = [ F 1 F 2 ] {\displaystyle {\begin{bmatrix}K_{11}&K_{12}\\K_{21}&K_{22}\end{bmatrix}}{\begin{bmatrix}x_{1}\\x_{2}\end{bmatrix}}={\begin{bmatrix}F_{1}\\F_{2}\end{bmatrix}}} where K and F are known and K , x and F are divided into submatrices as shown above. If F 2 contains only zeros, and only x 1 is desired, K can be reduced to yield the following system of equations K 11 , reduced x 1 = F 1 {\displaystyle K_{11,{\text{reduced}}}\,x_{1}=F_{1}} K 11 , reduced {\displaystyle K_{11,{\text{reduced}}}} is obtained by writing out the set of equations as follows: K 11 x 1 + K 12 x 2 = F 1 ( 1 ) {\displaystyle K_{11}x_{1}+K_{12}x_{2}=F_{1}\qquad (1)} K 21 x 1 + K 22 x 2 = 0 ( 2 ) {\displaystyle K_{21}x_{1}+K_{22}x_{2}=0\qquad (2)} Equation ( 2 ) can be solved for x 2 {\displaystyle x_{2}} (assuming invertibility of K 22 {\displaystyle K_{22}} ): x 2 = − K 22 − 1 K 21 x 1 . {\displaystyle x_{2}=-K_{22}^{-1}K_{21}x_{1}.} And substituting into ( 1 ) gives ( K 11 − K 12 K 22 − 1 K 21 ) x 1 = F 1 . {\displaystyle \left(K_{11}-K_{12}K_{22}^{-1}K_{21}\right)x_{1}=F_{1}.} Thus K 11 , reduced = K 11 − K 12 K 22 − 1 K 21 . {\displaystyle K_{11,{\text{reduced}}}=K_{11}-K_{12}K_{22}^{-1}K_{21}.} In a similar fashion, any row or column i of F with a zero value may be eliminated if the corresponding value of x i is not desired. A reduced K may be reduced again. As a note, since each reduction requires an inversion, and each inversion is an operation with computational cost O ( n 3 ) , most large matrices are pre-processed to reduce calculation time. In the 9th century, Persian mathematician Al-Khwarizmi 's Al-Jabr introduced the fundamental concepts of "reduction" and "balancing", referring to the transposition of subtracted terms to the other side of an equation and the cancellation of like terms on opposite sides of the equation. This is the operation which Al-Khwarizmi originally described as al-jabr . [ 1 ] The name " algebra " comes from the " al-jabr " in the title of his book.
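Returning to the static reduction described above, a small sketch using the Schur complement follows; the 3-degree-of-freedom stiffness matrix and load vector are invented values used only to check that the condensed system reproduces the kept unknowns.

```python
import numpy as np

def statically_reduce(K, F, kept, eliminated):
    """Static reduction: eliminate DOFs whose load entries are zero.

    Returns K_reduced = K11 - K12 K22^{-1} K21 (the Schur complement of K22)
    and the corresponding reduced load vector.
    """
    K11 = K[np.ix_(kept, kept)]
    K12 = K[np.ix_(kept, eliminated)]
    K21 = K[np.ix_(eliminated, kept)]
    K22 = K[np.ix_(eliminated, eliminated)]
    K_red = K11 - K12 @ np.linalg.solve(K22, K21)
    F_red = F[kept]                      # valid because F[eliminated] is zero
    return K_red, F_red

# Hypothetical 3-DOF system; the load on DOF 1 is zero, so it is condensed out.
K = np.array([[ 4.0, -2.0,  0.0],
              [-2.0,  4.0, -2.0],
              [ 0.0, -2.0,  4.0]])
F = np.array([1.0, 0.0, 2.0])

K_red, F_red = statically_reduce(K, F, kept=[0, 2], eliminated=[1])
x_kept = np.linalg.solve(K_red, F_red)
x_full = np.linalg.solve(K, F)
print(np.allclose(x_kept, x_full[[0, 2]]))   # True: same solution for the kept DOFs
```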
https://en.wikipedia.org/wiki/Reduction_(mathematics)
Reduction of summands is an algorithm for fast binary multiplication of non-signed binary integers. It is performed in three steps: production of summands, reduction of summands, and summation. In binary multiplication, each row of the summands will be either zero or one of the numbers to be multiplied. Consider the following: The second and fourth row of the summands are equivalent to the first term. Production of the summands requires a simple AND gate for each summand. Given enough AND gates, the time to produce the summands will be one cycle of the arithmetic logic unit . The summands are reduced using a common 1-bit full adder that accepts two 1-bit terms and a carry-in bit. It produces a sum and a carry-out. The full adders are arranged such that the sum remains in the same column of summands, but the carry-out is shifted left. In each round of reduction, three bits in a single column are used as the two terms and carry-in for the full adder, producing a single sum bit for the column. This reduces the bits in the column by a factor of 3. However, the column to the right will shift over carry-out bits, increasing the bits in the column by a third of the number of rows of summands. At worst, the reduction will be 2/3 the number of rows per round of reduction. The following shows how the first round of reduction is performed. Note that all "empty" positions of the summands are considered to be zero (a . is used here as indicator of the "assumed zero values"). In each row, the top three bits are the three inputs to the full adder (two terms and carry-in). The sum is placed in the top bit of the column. The carry-out is placed in the second row of the column to the left. The bottom bit is a single feed into an adder. The sum of this adder is placed in the third row of the column. Carry-out is ignored as it will always be zero, but by design it would be placed in the fourth row of the column to the left. For design, it is important to note that rows 1, 3, 5, ... (counting from the top) are filled with sums from the column itself. Rows 2, 4, 6, ... are filled with carry-out values from the column to the right. Reduction is performed again in exactly the same way. This time, only the top three rows of summands are of interest because all other summands must be zero. When there are only two significant rows of summands, the reduction cycles end. A basic full adder normally requires three cycles of the arithmetic logic unit . Therefore, each cycle of reduction is commonly 3 cycles long. When there are only two rows of summands remaining, they are added using a fast adder. There are many designs of fast adders, any of which may be used to complete this algorithm. The calculation time for the reduction of summands algorithm is: T = 1Δt + r3Δt + FA (where r is the number of reduction cycles and FA is the time for the fast adder at the end of the algorithm).
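A schematic sketch of the three steps (summand production, column-wise reduction with full adders, final addition); it models the hardware behaviour in software and is not a description of any particular implementation.

```python
def multiply_by_reduction_of_summands(a, b, width=8):
    """Unsigned binary multiplication sketched as summand reduction,
    for operands that fit in `width` bits.

    Summands are stored as per-column bit counts.  Each reduction round feeds
    groups of three bits in a column into a full adder: the sum bit stays in
    the column, the carry-out moves one column to the left.
    """
    # Step 1: production of summands -- one shifted copy of `a` per set bit of `b`.
    columns = [0] * (2 * width)
    for i in range(width):
        if (b >> i) & 1:
            for j in range(width):
                columns[i + j] += (a >> j) & 1

    # Step 2: reduction with full adders until every column holds at most two bits.
    while max(columns) > 2:
        reduced = [0] * len(columns)
        for col, count in enumerate(columns):
            full_adders, leftover = divmod(count, 3)
            reduced[col] += full_adders + leftover        # sum bits stay in place
            if col + 1 < len(reduced):
                reduced[col + 1] += full_adders           # carry bits shift left
        columns = reduced

    # Step 3: final (here: ordinary carry-propagate) addition of the two rows.
    result, carry = 0, 0
    for col, count in enumerate(columns):
        total = count + carry
        result |= (total & 1) << col
        carry = total >> 1
    return result

print(multiply_by_reduction_of_summands(13, 11))   # 143
```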
https://en.wikipedia.org/wiki/Reduction_of_summands
Reductions with diimide are chemical reactions that convert unsaturated organic compounds to reduced alkane products. In the process, diimide ( N 2 H 2 ) is oxidized to dinitrogen. [ 1 ] In 1929, the conversion of oleic acid to stearic acid in the presence of hydrazine was observed. [ 2 ] The short-lived intermediate diimide was not implicated in this reductive process until the 1960s. Since that time, several methods of generating transient amounts of diimide have been developed. [ 3 ] [ 4 ] In the presence of unpolarized alkenes, alkynes or allenes, diimide is converted into dinitrogen with reduction (net addition of dihydrogen) of the unsaturated functionality. Diimide formation is the rate-limiting step of the process, and a concerted mechanism involving cis -diimide has been proposed. [ 5 ] This reduction represents a metal-free alternative to catalytic hydrogenation reductions, and does not lead to the cleavage of sensitive O–O and N–O bonds. (1) Diimide reductions result in the syn addition of dihydrogen to alkenes and alkynes. This observation has led to the proposal that the mechanism involves concerted hydrogen transfer from cis -diimide to the substrate. The cis isomer is the less stable of the two; however, acid catalysis may speed up equilibration of the trans and cis isomers. [ 5 ] (2) Diimide is typically generated either through the oxidation of hydrazine or the decarboxylation of potassium azodicarboxylate . Kinetic experiments suggest that regardless of its method of generation, the formation of diimide is rate-limiting. The transition state of the hydrogen transfer step is likely early; however, high stereoselectivity has been obtained in many reductions of chiral alkenes. [ 6 ] (3) The order of reactivity of unsaturated substrates is: alkynes, allenes > terminal or strained alkenes > substituted alkenes. Trans alkenes react more rapidly than cis alkenes in general. The reactivity difference between alkynes and alkenes is usually not great enough to isolate intermediate alkenes; however, alkenes can be isolated from allene reductions. Diimide reduces symmetrical double bonds, i.e. C=C, N=N, O=O, etc.; unsymmetrical double bonds cannot be reduced. Diimide is most effective at reducing unpolarized carbon-carbon double or triple bonds. In reactions with other unsaturated systems, disproportionation of diimide to nitrogen gas and hydrazine is a competing process that significantly degrades the reducing agent. Many groups that are ordinarily sensitive to reductive conditions, including peroxides, are not affected by the conditions of diimide reductions. [ 7 ] (4) Diimide will selectively reduce less substituted double bonds under some conditions. Discrimination between terminal and disubstituted double bonds is often low, however. (5) Allenes are reduced to the more highly substituted alkene in the presence of diimide, although yields are low. [ 8 ] (6) Iodoalkynes represent an exception to the rule that alkenes cannot be obtained from alkynes. After diimide reduction of iodoalkynes, cis -iodoalkenes may be isolated in good yield. [ 9 ] (7) Recently, diimide has been generated catalytically through the oxidation of hydrazine by a flavin-based organocatalyst. This system selectively reduces terminal double bonds. [ 10 ] (8) In general, diimide does not efficiently reduce polarized double bonds; however, a limited number of examples do exist in the literature. Aromatic aldehydes are reduced by diimide generated through the decarboxylation of potassium azodicarboxylate . [ 11 ]
Reductions of carbon-carbon double and triple bonds are most commonly accomplished through catalytic hydrogenation: [ 12 ] (9) However, diimide reduction offers the advantages that the handling of gaseous hydrogen is unnecessary and removal of catalysts and byproducts (one of which is gaseous dinitrogen) is straightforward. Hydrogenolysis side reactions do not occur during diimide reductions, and N–O and O–O bonds are not affected by the reaction conditions. On the other hand, diimide reductions often require long reaction times, and reductions of highly substituted or polarized double bonds are sluggish. In addition, an excess of the reagent used to generate diimide (e.g. dipotassium azodicarboxylate) is required for hydrogenation because of the two competing processes of disproportionation (to N 2 H 4 and N 2 ) and decomposition (to N 2 and H 2 ) that the liberated diimide can also undergo. [ 13 ] [ 14 ] Unfortunately, this means that in the case of alkyne reduction, over-reduction to the alkane can occur resulting in diminished yields where the cis alkene is the desired product. [ 14 ] A variety of methods for the generation of diimide exist. The most synthetically useful methods are: Procedures (particularly those employing air as an oxidant) are typically straightforward and do not require special handling techniques.
https://en.wikipedia.org/wiki/Reductions_with_diimide
Reductions with hydrosilanes are methods used for hydrogenation and hydrogenolysis of organic compounds . The approach is a subset of ionic hydrogenation . In this particular method, the substrate is treated with a hydrosilane and auxiliary reagent, often a strong acid, resulting in formal transfer of hydride from silicon to carbon. [ 1 ] This style of reduction with hydrosilanes enjoys diverse if specialized applications. Some alcohols are reduced to alkanes when treated with hydrosilanes in the presence of a strong Lewis acid. Brønsted acids may also be used. Tertiary alcohols undergo facile reduction using boron trifluoride etherate as the Lewis acid. [ 2 ] Primary alcohols require an excess of the silane, a stronger Lewis acid, and long reaction times. [ 3 ] Skeletal rearrangements are sometimes induced. [ 4 ] Another side reaction is nucleophilic attack of the conjugate base on the intermediate carbocation. [ 5 ] In organosilane reductions of substrates bearing prostereogenic groups, diastereoselectivity is often high. Reduction of either diastereomer of 2-phenyl-2-norbornanol leads exclusively to the endo diastereomer of 2-phenylnorbornane. [ 6 ] None of the exo diastereomer was observed. Allylic alcohols may be deoxygenated in the presence of tertiary alcohols when ethereal lithium perchlorate is employed as a source of Li + . [ 7 ] Reductions of alkyl halides and triflates gives poorer yields in general than reductions of alcohols. A Lewis or Bronsted acid is required. [ 8 ] Polymeric hydrosilanes, such as polymethylhydrosiloxane (PHMS), may be employed to facilitate separation of the reduced products from silicon-containing byproducts. [ 9 ] [ 10 ] Enantioselective reductions of ketones may be accomplished through the use of catalytic amounts of chiral transition metal complexes. [ 11 ] In some cases, the transition metal simply serves as a Lewis acid that coordinates to the ketone oxygen; however, some metals (most notably copper) react with hydrosilanes to afford metal hydride intermediates, which act as the active reducing agent. [ 12 ] In the presence of rhodium catalyst 1 and rhodium trichloride, 2-phenylcyclohexanone is reduced with no diastereoselectivity but high enantioselectivity. [ 13 ] Esters may be reduced to alcohols under conditions of nucleophilic activation with caesium or potassium fluoride. [ 14 ] Aldehydes undergo hydrosilylation in the presence of hydrosilanes and fluoride. The resulting silyl ethers can be hydrolyzed with 1 M hydrochloric acid . Optimal yields of the hydrosilylation are obtained when the reaction is carried out in very polar solvents. [ 10 ] Hydrosilanes can reduce 1,1-disubstituted double bonds that form stable tertiary carbocations upon protonation. Trisubstituted double bonds may be reduced selectively in the presence of 1,2-disubstituted or monosubstituted alkenes. [ 15 ] Aromatic compounds may be reduced with TFA and triethylsilane. Substituted furans are reduced to tetrahydrofuran derivatives in high yield. [ 16 ] A synthesis of (+)-estrone relies on selective hydrosilane reduction of a conjugated alkene as a key step. The ketone carbonyl and isolated double bond are unaffected under the conditions shown. [ 17 ] Acetals, ketals, and aminals are reduced in the presence of hydrosilanes and acid. Site-selective reduction of acetals and ketals whose oxygens are inequivalent have been reported—the example below is used in a synthesis of Tamiflu . 
[ 18 ] Other functional groups that have been reduced with hydrosilanes include amides, [ 19 ] α,β-unsaturated esters, [ 20 ] enamines, [ 21 ] imines, [ 22 ] and azides. [ 23 ] Trifluoroacetic acid, often used in these reductions, is a strong, corrosive acid. Some hydrosilanes are pyrophoric.
https://en.wikipedia.org/wiki/Reductions_with_hydrosilanes
Reductions with metal alkoxyaluminium hydrides are chemical reactions that involve either the net hydrogenation of an unsaturated compound or the replacement of a reducible functional group with hydrogen by metal alkoxyaluminium hydride reagents. [ 1 ] [ 2 ] Sodium borohydride and lithium aluminium hydride are commonly used for the reduction of organic compounds. [ 3 ] [ 4 ] These two reagents are on the extremes of reactivity—whereas lithium aluminium hydride reacts with nearly all reducible functional groups, sodium borohydride reacts with a much more limited range of functional groups . Diminished or enhanced reactivity may be realized by the replacement of one or more of the hydrogens in these reagents with alkoxy groups. Additionally, substitution of hydrogen for chiral alkoxy groups in these reagents enables asymmetric reductions. [ 5 ] Although methods involving stoichiometric amounts of chiral metal hydrides have been supplanted in modern times by enantioselective catalytic reductions, they are of historical interest as early examples of stereoselective reactions. The table below summarizes the reductions that may be carried out with a variety of metal aluminium hydrides and borohydrides. The symbol "+" indicates that reduction does occur, "-" indicates that reduction does not occur, "±" indicates that reduction depends on the structure of the substrate, and "0" indicates a lack of literature information. Reduction by alkoxyaluminium hydrides is thought in most cases to proceed by a polar mechanism. [ 6 ] Hydride transfer to the organic substrate generates an organic anion, which is neutralized either by protic solvent or upon acidic workup. Reductions of α,β-unsaturated carbonyl compounds may occur in a 1,2 sense (direct addition) or a 1,4 sense (conjugate addition). The tendency to add in a 1,4 sense is correlated with the softness of the hydride reagent according to Pearson's hard-soft acid-base theory. [ 7 ] Experimental results agree with the theory—softer hydride reagents afford higher yields of the conjugate reduction product. [ 8 ] A few substrates, including diaryl ketones, [ 9 ] diarylalkenes, [ 10 ] and anthracene , [ 11 ] are known to undergo reduction by single-electron transfer pathways with lithium aluminium hydride. Metal alkoxylaluminium hydride reagents are well characterized in a limited number of cases. [ 12 ] Precise characterization is complicated in some cases by disproportionation, which converts alkyoxyaluminium hydrides into alkoxyaluminates and metal aluminium hydride: [ 13 ] The origin of diastereoselectivity in reductions of chiral ketones has been extensively analyzed and modeled. [ 14 ] [ 15 ] According to a model advanced by Felkin, [ 16 ] diastereoselectivity is controlled by the relative energy of the three transition states I , II , and III . Transition state I is favored in the absence of polar groups on the α carbon, and stereoselectivity increases as the size of the achiral ketone substituent (R) increases. Transition state III is favored for reductions of alkyl ketones in which R M is an electron-withdrawing group, because the nucleophile and electron-withdrawing substituent prefer to be as far away from one another as possible. Diastereoselectivity in reductions of cyclic ketones has also been studied. Conformationally flexible ketones undergo axial attack by the hydride reagent, leading to the equatorial alcohol. Rigid cyclic ketones, on the other hand, undergo primarily equatorial attack to afford the axial alcohol. 
Preferential equatorial attack on rigid ketones has been rationalized by invoking "steric approach control"—an equatorial approach of the hydride reagent is less sterically hindered than an axial approach. [ 17 ] The preference for axial attack on conformationally flexible cyclic ketones has been addressed by a model put forth by Felkin and Anh. [ 18 ] [ 19 ] The transition state for axial attack ( IV ) suffers from steric strain between any axial substituents and the incoming hydride reagent. The transition state for equatorial attack ( V ) suffers from torsional strain between the incoming hydride reagent and adjacent equatorial hydrogens. The difference between these two strain energies determines which direction of attack is favored, and when R is small, torsional strain in V dominates and the equatorial alcohol product is favored. Alkoxyaluminium and closely related hydride reagents reduce a wide variety of functional groups , often with good selectivity. This section, organized by functional group, covers the most common or synthetically useful methods for alkoxyaluminium hydride reduction of organic compounds . Many selective reductions of carbonyl compounds can be effected by taking advantage of the unique reactivity profiles of metal alkoxylaluminium hydrides. For instance, lithium tri- tert -butoxy)aluminium hydride (LTBA) reduces aldehydes and ketones selectively in the presence of esters, with which it reacts extremely slowly. [ 20 ] α,β-Unsaturated ketones may be reduced selectively in a 1,2 or 1,4 sense by a judicious choice of reducing agent. Use of relatively unhindered lithium trimethoxyaluminium hydride results in nearly quantitative direct addition to the carbonyl group (Eq. ( 9 )). [ 21 ] On the other hand, use of the bulky reagent LTBA leads to a high yield of the conjugate addition product (Eq. ( 10 )). [ 22 ] Ether cleavage is difficult to accomplish with most hydride reagents. However, debenzylation of benzyl aryl ethers may be accomplished with SMEAH. [ 23 ] This protocol is a useful alternative to methods requiring acid or hydrogenolysis (e.g., Pd/C and hydrogen gas). Epoxides are generally attacked by alkoxyaluminium hydrides at the less substituted position. A nearby hydroxyl group may facilitate intramolecular delivery of the hydride reagent, allowing for selective opening of 1,2-disubstituted epoxides at the position closer to the hydroxyl group. [ 24 ] The configuration at the untouched epoxide carbon is preserved. Unsaturated carbonyl compounds may be reduced either to saturated or unsaturated alcohols by alkoxyaluminium hydride reagents. Addition of an unsaturated aldehyde to a solution of Red-Al afforded the saturated alcohol; inverse addition yielded the unsaturated alcohol product. [ 25 ] Alkenes undergo hydroalumination in the presence of some alkoxyaluminium hydrides. [ 26 ] In a related application, NaAlH 2 (OCH 2 CH 2 OCH 3 ) 2 (sodium bis(methoxyethoxy) aluminium dihydride, SMEAH or Red-Al) reacts with zirconocene dichloride to afford zirconocene chloride hydride (Schwartz's reagent). Alkenes undergo hydrozirconation in the presence of this reagent, affording functionalized products after quenching with an electrophile. [ 27 ] Functional groups containing heteroatoms other than oxygen may also be reduced to the corresponding hydrocarbons in the presence of an alkoxyaluminium hydride reagent. Primary alkyl halides undergo reduction to the corresponding alkanes in the presence of NaAlH(OH)(OCH 2 CH 2 OCH 3 ) 2 . 
Secondary halides are less reactive, but afford alkanes in reasonable yield. [ 28 ] Sulfoxides are reduced to the corresponding sulfides in good yield in the presence of SMEAH. [ 29 ] Imines are reduced by metal alkoxyaluminium hydrides to the corresponding amines. In the example below, the exo amine forms with high diastereoselectivity; the selectivity of hydride reduction in this case is higher than that of catalytic hydrogenation. [ 30 ] Alkoxyaluminium hydrides are typically prepared by treatment of lithium aluminium hydride with the corresponding alcohol. [ 31 ] Hydrogen evolution indicates the formation of alkoxyaluminium hydride products. Hindered hydrides such as lithium tri( tert -butoxy)aluminium hydride (LTBA) are stable for long periods of time under inert atmosphere, but lithium trimethoxyaluminium hydride (LTMA) undergoes disproportionation and should be used immediately after preparation. Pure, solid Red-Al is stable for several hours under inert atmosphere and is available commercially as a 70% solution in toluene under the trade name Vitride or Synhydrid. Reductions may typically be carried out in a round-bottom flask equipped with a drying-tube-capped reflux condenser, a mercury-sealed mechanical stirrer, a thermometer, a nitrogen inlet, and an addition funnel with a pressure-equalizing side arm. The most common solvents used are tetrahydrofuran and diethyl ether; whatever solvent is used should be anhydrous and pure. Alkoxyaluminium hydrides should be kept as dry as possible and represent a significant fire hazard, particularly when an excess of hydride is used (hydrogen evolves during workup). To a solution of 1,3-dihydro-1,3-bis(chloromethyl)benzo[c]thiophene 2,2-dioxide (0.584 g, 2.2 mmol) in 50 mL of dry benzene was added 0.80 mL (2.8 mmol) of a 70% benzene solution of NaAlH 2 (OCH 2 CH 2 OCH 3 ) 2 via syringe, and the solution was refluxed for 12 hours. The mixture was cooled to 0 °C and decomposed with 20% sulfuric acid. The benzene layer was separated, washed with 10 mL of water, dried over potassium carbonate, and concentrated to give the product as a yellow oil in 91% yield (0.480 g); IR (film) 770, 1140, and 1320 cm–1; NMR (CDCl 3 ) δ 4.22 (q, 2 H), 1.61 and 1.59 (2 d, 6 H, J = 7 Hz), 7.3 (s, 4 H); m/e (rel. intensity) 196 (M+) (14), 132 (M–SO 2 ) (100); MS analysis 196.055796 (calc.), 196.057587 (obs.).
https://en.wikipedia.org/wiki/Reductions_with_metal_alkoxyaluminium_hydrides
In organochlorine chemistry , reductive dechlorination describes any chemical reaction which cleaves the covalent bond between carbon and chlorine via reductants , to release chloride ions. Many modalities have been implemented, depending on the application. Reductive dechlorination is often applied to remediation of chlorinated pesticides or dry cleaning solvents . It is also used occasionally in the synthesis of organic compounds , e.g. as pharmaceuticals. Dechlorination is a well-researched reaction in organic synthesis , although it is not often used. Usually stoichiometric amounts of dechlorinating agent are required. In one classic application, the Ullmann reaction , chloroarenes are coupled to biphenyls . For example, the activated substrate 2-chloronitrobenzene is converted into 2,2'-dinitro biphenyl with a copper - bronze alloy . [ 1 ] [ 2 ] Zerovalent iron effects similar reactions. Organophosphorus(III) compounds effect gentle dechlorinations. The products are alkenes and phosphorus(V). [ 3 ] Vicinal reduction involves the removal of two halogen atoms that are adjacent on the same alkane or alkene , leading to the formation of an additional carbon-carbon bond. [ 5 ] Biological reductive dechlorination is often effected by certain species of bacteria . Sometimes the bacterial species are highly specialized for organochlorine respiration and even a particular electron donor, as in the case of Dehalococcoides and Dehalobacter . In other examples, such as Anaeromyxobacter , bacteria have been isolated that are capable of using a variety of electron donors and acceptors, with a subset of possible electron acceptors being organochlorines. [ 6 ] These reactions depend on a molecule which tends to be very aggressively sought after by some microbes, vitamin B12 . [ 7 ] Reductive dechlorination of chlorinated organic molecules is relevant to bioremediation of polluted groundwater. [ 8 ] [ 9 ] One example [ 10 ] is the organochloride respiration of the dry-cleaning solvent, tetrachloroethylene , and the engine degreasing solvent trichloroethylene by anaerobic bacteria , often members of the candidate genera Dehalococcoides . Bioremediation of these chloroethenes can occur when other microorganisms at the contaminated site provide H 2 as a natural byproduct of various fermentation reactions. The dechlorinating bacteria use this H 2 as their electron donor, ultimately replacing chlorine atoms in the chloroethenes with hydrogen atoms via hydrogenolytic reductive dechlorination. This process can proceed in the soil provided the availability of organic electron donors and the appropriate strains of Dehalococcoides . Trichloroethylene is dechlorinated via dichloroethene and vinyl chloride to ethylene . [ 11 ] A chloroform -degrading reductive dehalogenase enzyme has been reported in a Dehalobacter member. The chloroform reductive dehalogenase, termed TmrA, was found to be transcriptional up-regulated in response to chloroform respiration [ 12 ] and the enzyme can be obtained both in native [ 13 ] and recombinant forms. [ 14 ] Reductive dechlorination has been investigated for bioremediation of polychlorinated biphenyls (PCB) and chlorofluorocarbons (CFC). The reductive dechlorination of PCBs is performed by anaerobic microorganisms that utilize the PCB as an electron sink. The result of this is the reduction of the "meta" site, followed by the "para" site, and finally the "ortho" site, leading to a dechlorinated product. 
[ 15 ] [ 16 ] [ 17 ] In the Hudson River, microorganisms effect dechlorination over the course of weeks. The resulting monochlorobiphenyls and dichlorobiphenyls are less toxic and more easily degraded by aerobic organisms than their more highly chlorinated counterparts. [ 17 ] The prominent drawback that has prevented the widespread use of reductive dechlorination for PCB detoxification is that dechlorination rates are slower than desired. [ 16 ] It has been suggested that bioaugmentation with DF-1 can enhance reductive dechlorination rates of PCBs through stimulation of dechlorination. Additionally, high inorganic carbon levels do not affect dechlorination rates in low-PCB-concentration environments. [ 15 ] Reductive dechlorination also applies to CFCs. [ 18 ] Reductive dechlorination of CFCs including CFC-11, CFC-113, chlorotrifluoroethene, CFC-12, HCFC-141b, and tetrachloroethene occurs through hydrogenolysis. Reduction rates of CFCs mirror theoretical rates calculated from the Marcus theory of electron transfer. [ 19 ] The electrochemical reduction of chlorinated chemicals such as chlorinated hydrocarbons and chlorofluorocarbons can be carried out by electrolysis in appropriate solvents, such as mixtures of water and alcohol. Key components of an electrolytic cell include the type of electrodes, the electrolyte medium, and the use of mediators. The cathode transfers electrons to the molecule, which decomposes to produce the corresponding hydrocarbon (hydrogen atoms substitute for the original chlorine atoms) and free chloride ions. For instance, the reductive dechlorination of CFCs is complete and produces several hydrofluorocarbons (HFCs) plus chloride. Hydrodechlorination (HDC) is a type of reductive dechlorination that is useful due to its high reaction rate. It uses H 2 as the reducing agent over a range of potential electrode reactors and catalysts. [ 20 ] Among the catalysts studied, which include precious metals (platinum, palladium, rhodium), transition metals (niobium and molybdenum), and metal oxides, precious metals are generally preferred. As an example, palladium often adopts a lattice that can easily embed hydrogen gas, making the hydrogen more accessible and readily oxidized. [ 21 ] However, a common issue for HDC is catalyst deactivation and regeneration. As catalysts are depleted, chlorine poisoning of surfaces can sometimes be observed, and on rare occasions metal sintering and leaching occur as a result. [ 22 ] Electrochemical reduction can be performed at ambient pressure and temperature, [ 23 ] so it does not disrupt microbial environments or add extra cost to remediation. The process of dechlorination can be highly controlled to avoid toxic chlorinated intermediates and byproducts such as the dioxins formed during incineration. Trichloroethylene and perchloroethylene are common treatment targets and are directly converted to environmentally benign products. The chlorine in chlorinated alkenes and alkanes is converted to hydrogen chloride, which is then neutralized with a base. [ 22 ] However, even though there are many potential benefits to adopting this method, research has mainly been conducted in laboratory settings, with only a few field studies, so the approach is not yet well established.
https://en.wikipedia.org/wiki/Reductive_dechlorination
Reductive elimination is an elementary step in organometallic chemistry in which the oxidation state of the metal center decreases while forming a new covalent bond between two ligands . It is the microscopic reverse of oxidative addition , and is often the product-forming step in many catalytic processes. Since oxidative addition and reductive elimination are reverse reactions, the same mechanisms apply for both processes, and the product equilibrium depends on the thermodynamics of both directions. [ 1 ] [ 2 ] Reductive elimination is often seen in higher oxidation states, and can involve a two-electron change at a single metal center (mononuclear) or a one-electron change at each of two metal centers (binuclear, dinuclear, or bimetallic). [ 1 ] [ 2 ] For mononuclear reductive elimination, the oxidation state of the metal decreases by two, while the d-electron count of the metal increases by two. This pathway is common for d 8 metals Ni(II), Pd(II), and Au(III) and d 6 metals Pt(IV), Pd(IV), Ir(III), and Rh(III). Additionally, mononuclear reductive elimination requires that the groups being eliminated must be cis to one another on the metal center. [ 3 ] For binuclear reductive elimination, the oxidation state of each metal decreases by one, while the d-electron count of each metal increases by one. This type of reactivity is generally seen with first row metals, which prefer a one-unit change in oxidation state, but has been observed in both second and third row metals. [ 4 ] As with oxidative addition, several mechanisms are possible with reductive elimination. The prominent mechanism is a concerted pathway, meaning that it is a nonpolar, three-centered transition state with retention of stereochemistry . In addition, an S N 2 mechanism, which proceeds with inversion of stereochemistry, or a radical mechanism, which proceeds with obliteration of stereochemistry, are other possible pathways for reductive elimination. [ 1 ] The rate of reductive elimination is greatly influenced by the geometry of the metal complex. In octahedral complexes, reductive elimination can be very slow from the coordinatively saturated center; and often, reductive elimination only proceeds via a dissociative mechanism, where a ligand must initially dissociate to make a five-coordinate complex. This complex adopts a Y-type distorted trigonal bipyramidal structure where a π-donor ligand is at the basal position and the two groups to be eliminated are brought very close together. After elimination, a T-shaped three-coordinate complex is formed, which will associate with a ligand to form the square planar four-coordinate complex. [ 5 ] Reductive elimination of square planar complexes can progress through a variety of mechanisms: dissociative , nondissociative, and associative . Similar to octahedral complexes, a dissociative mechanism for square planar complexes initiates with loss of a ligand, generating a three-coordinate intermediate that undergoes reductive elimination to produce a one-coordinate metal complex. For a nondissociative pathway, reductive elimination occurs from the four-coordinate system to afford a two-coordinate complex. If the eliminating ligands are trans to each other, the complex must first undergo a trans to cis isomerization before eliminating. In an associative mechanism, a ligand must initially associate with the four-coordinate metal complex to generate a five-coordinate complex that undergoes reductive elimination synonymous to the dissociation mechanism for octahedral complexes. 
[ 6 ] [ 7 ] Reductive elimination is sensitive to a variety of factors including: (1) metal identity and electron density, (2) sterics, (3) participating ligands, (4) coordination number , (5) geometry , and (6) photolysis /oxidation. Additionally, because reductive elimination and oxidative addition are reverse reactions, any sterics or electronics that enhance the rate of reductive elimination must thermodynamically hinder the rate of oxidative addition. [ 2 ] First-row metal complexes tend to undergo reductive elimination faster than second-row metal complexes, which tend to be faster than third-row metal complexes. This is due to bond strength, with metal-ligand bonds in first-row complexes being weaker than metal-ligand bonds in third-row complexes. Additionally, electron-poor metal centers undergo reductive elimination faster than electron-rich metal centers since the resulting metal would gain electron density upon reductive elimination. [ 8 ] Reductive elimination generally occurs more rapidly from a more sterically hindered metal center because the steric encumbrance is alleviated upon reductive elimination. Additionally, wide ligand bite angles generally accelerate reductive elimination because the sterics force the eliminating groups closer together, which allows for more orbital overlap . [ 9 ] Kinetics for reductive elimination are hard to predict, but reactions that involve hydrides are particularly fast due to effects of orbital overlap in the transition state. [ 10 ] Reductive elimination occurs more rapidly for complexes of three- or five-coordinate metal centers than for four- or six-coordinate metal centers. For even coordination number complexes, reductive elimination leads to an intermediate with a strongly metal-ligand antibonding orbital . When reductive elimination occurs from odd coordination number complexes, the resulting intermediate occupies a nonbonding molecular orbital . [ 11 ] Reductive elimination generally occurs faster for complexes whose structures resemble the product. [ 2 ] Reductive elimination can be induced by oxidizing the metal center to a higher oxidation state via light or an oxidant. [ 12 ] Reductive elimination has found widespread application in academia and industry, most notable being hydrogenation , [ 13 ] the Monsanto acetic acid process , [ 14 ] hydroformylation , [ 15 ] and cross-coupling reactions . [ 16 ] In many of these catalytic cycles, reductive elimination is the product forming step and regenerates the catalyst; however, in the Heck reaction [ 17 ] and Wacker process , [ 18 ] reductive elimination is involved only in catalyst regeneration, as the products in these reactions are formed via β–hydride elimination .
https://en.wikipedia.org/wiki/Reductive_elimination
Reductive stress (RS) is defined as an abnormal accumulation of reducing equivalents despite the presence of intact oxidation and reduction systems. [ 1 ] A redox reaction involves the transfer of electrons from reducing agents (reductants) to oxidizing agents (oxidants), and redox couples account for the majority of the cellular electron flow. [ 2 ] RS is a state in which there are more reducing equivalents than reactive oxygen species (ROS) in the form of known biological redox couples such as GSH/GSSG, NADP+/NADPH, and NAD+/NADH. [ 1 ] Reductive stress is the counterpart to oxidative stress, where electron acceptors are expected to be mostly reduced. [ 3 ] Reductive stress is likely derived from intrinsic signals that allow for the cellular defense against pro-oxidative conditions. [ 4 ] There is a feedback regulation balance between reductive and oxidative stress, in which chronic RS induces oxidative species (OS), which in turn increases the production of RS again. An excess of reducing equivalents has several implications: regulation of cellular signaling pathways by decreasing cell growth responses, modification of transcriptional activity, perturbation of disulfide bond formation within proteins, increased mitochondrial malfunction, decreased cellular metabolism, and cytotoxicity. [ 1 ] [ 5 ] The overexpression of antioxidant enzymatic systems promotes the excess production of reducing equivalents, resulting in the depletion of ROS and prompting RS in cells. Nuclear factor erythroid 2–related factor 2 (Nrf2) is an important transcription factor that regulates a multitude of genes that code for the antioxidant response; after uncontrolled amplification of this signaling pathway, RS increases. [ 6 ] [ 7 ] Although different organelles may each have a different redox status, probing for factors such as glutathione and hydrogen peroxide (H 2 O 2 ) has shown that reductive stress is present in the endoplasmic reticulum (ER) of senescent cells. Reductive stress is significant in the aging process of a cell, and when ER oxidation status is elevated, cellular aging is slowed. [ 8 ] In particular, increased reductive stress may result in many downstream effects such as increased apoptosis, decreased cell survival, and mitochondrial dysfunction, all of which need to be properly regulated to ensure that the needs of the cell are met. [ 9 ] Data show that in isolated mitochondria, when there is a high NADH/NAD+ ratio (an example of RS), ROS increase significantly in the mitochondrial matrix, resulting in H 2 O 2 spillover from the mitochondria. [ 4 ] Reductive stress has even been suggested to lead to a higher probability of cardiomyopathy in humans. This has also been linked to the abundant presence of heat shock protein 27 (Hsp27), suggesting that high levels of Hsp27 can induce cardiomyopathy. [ 10 ] Reductive stress is present in many diseases with abnormalities such as an increase in reducing equivalents, resulting in issues such as hypoxia-induced oxidative stress. [ 8 ] A more reductive redox environment promotes cancer metastasis, and cancer cells use reductive stress to promote growth and resist anti-cancer agents such as chemotherapy and radiotherapy. [ 4 ]
https://en.wikipedia.org/wiki/Reductive_stress
Reductones are a special class of organic compounds. They are enediols with a carbonyl group adjacent to the enediol group, i.e. RC(OH)=C(OH)-C(O)R. The enediol structure is stabilized by the resonance resulting from tautomerism with the adjacent carbonyl. Therefore, the chemical equilibrium favors the enediol form rather than the keto form. [ 1 ] Reductones are reducing agents, and thus efficacious antioxidants. Some are fairly strong acids. [ 2 ] Examples of reductones are tartronaldehyde, reductic acid, and ascorbic acid.
https://en.wikipedia.org/wiki/Reductone
The redundancy principle in biology [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] expresses the need of many copies of the same entity ( cells , molecules , ions ) to fulfill a biological function . Examples are numerous: disproportionate numbers of spermatozoa during fertilization compared to one egg, large number of neurotransmitters released during neuronal communication compared to the number of receptors , large numbers of released calcium ions during transient in cells, and many more in molecular and cellular transduction or gene activation and cell signaling . This redundancy is particularly relevant when the sites of activation are physically separated from the initial position of the molecular messengers. The redundancy is often generated for the purpose of resolving the time constraint of fast-activating pathways. It can be expressed in terms of the theory of extreme statistics to determine its laws and quantify how the shortest paths are selected. The main goal is to estimate these large numbers from physical principles and mathematical derivations. When a large distance separates the source and the target (a small activation site), the redundancy principle explains that this geometrical gap can be compensated by large number. Had nature used less copies than normal, activation would have taken a much longer time, as finding a small target by chance is a rare event and falls into narrow escape problems . [ 10 ] The time for the fastest particles to reach a target in the context of redundancy depends on the numbers and the local geometry of the target. In most of the time, it is the rate of activation. This rate should be used instead of the classical Smoluchowski 's rate describing the mean arrival time, but not the fastest. The statistics of the minimal time to activation set kinetic laws in biology, which can be quite different from the ones associated to average times. The motion of a particle located at position X t {\displaystyle X_{t}} can be described by the Smoluchowski's limit of the Langevin equation : [ 11 ] [ 12 ] d X t = 2 D d B t + 1 γ F ( x ) d t , {\displaystyle dX_{t}={\sqrt {2D}}\,dB_{t}+{\frac {1}{\gamma }}F(x)dt,} where D {\displaystyle D} is the diffusion coefficient of the particle, γ {\displaystyle \gamma } is the friction coefficient per unit of mass, F ( x ) {\displaystyle F(x)} the force per unit of mass, and B t {\displaystyle B_{t}} is a Brownian motion . This model is classically used in molecular dynamics simulations. x n + 1 = { x n − a , with probability l ( x n ) x n + b , with probability r ( x n ) {\displaystyle {\begin{aligned}x_{n+1}={\begin{cases}x_{n}-a,&{\text{with probability }}l(x_{n})\\x_{n}+b,&{\text{ with probability }}r(x_{n})\end{cases}}\end{aligned}}} , which is for example a model of telomere length dynamics. Here r ( x ) = 1 1 + β x , {\displaystyle r(x)={\frac {1}{1+\beta x}},} , with r ( x ) + l ( x ) = 1 {\displaystyle r(x)+l(x)=1} . [ 13 ] X ˙ = v 0 u , {\displaystyle {\dot {X}}=v_{0}{\bf {u,}}} where u {\displaystyle {\bf {u}}} is a unit vector chosen from a uniform distribution. Upon hitting an obstacle at a boundary point X 0 ∈ ∂ Ω {\displaystyle X_{0}\in \partial \Omega } , the velocity changes to X ˙ = v 0 v , {\displaystyle {\dot {X}}=v_{0}{\bf {v,}}} where v {\displaystyle {\bf {v}}} is chosen on the unit sphere in the supporting half space at X 0 {\displaystyle X_{0}} from a uniform distribution, independently of u {\displaystyle {\bf {u}}} . 
This rectilinear with constant velocity is a simplified model of spermatozoon motion in a bounded domain Ω {\displaystyle \Omega } . Other models can be diffusion on graph, active graph motion. [ 14 ] The mathematical analysis of large numbers of molecules, which are obviously redundant in the traditional activation theory, is used to compute the in vivo time scale of stochastic chemical reactions . The computation relies on asymptotics or probabilistic approaches to estimate the mean time of the fastest to reach a small target in various geometries. [ 15 ] [ 16 ] [ 17 ] With N non-interacting i.i.d. Brownian trajectories (ions) in a bounded domain Ω that bind at a site, the shortest arrival time is by definition τ 1 = min ( t 1 , … , t N ) , {\displaystyle \tau ^{1}=\min(t_{1},\ldots ,t_{N}),} where t i {\displaystyle t_{i}} are the independent arrival times of the N ions in the medium. The survival distribution of arrival time of the fastest P r ( τ 1 > t ) {\displaystyle Pr(\tau ^{1}>t)} is expressed in terms of a single particle, P r ( τ 1 > t ) = P r N ( t 1 > t ) {\displaystyle Pr(\tau ^{1}>t)=Pr^{N}(t_{1}>t)} . Here P r { t 1 > t } {\displaystyle Pr\{t_{1}>t\}} is the survival probability of a single particle prior to binding at the target.This probability is computed from the solution of the diffusion equation in a domain Ω {\displaystyle \Omega } : ∂ p ( x , t ) ∂ t = D Δ p ( x , t ) for x ∈ Ω , t > 0 {\displaystyle {\frac {\partial p(x,t)}{\partial t}}=D\Delta p(x,t){\hbox{ for }}x\in \Omega ,t>0} p ( x , 0 ) = p 0 ( x ) for x ∈ Ω ∂ p ∂ n ( x , t ) = 0 for x ∈ ∂ Ω r p ( x , t ) = 0 for x ∈ ∂ Ω a , {\displaystyle {\begin{aligned}p(x,0)=&p_{0}(x){\hbox{ for }}x\in \Omega \\{\frac {\partial p}{\partial n}}(x,t)&=0{\hbox{ for }}x\in \partial \Omega _{r}\\p(x,t)&=0{\hbox{ for }}x\in \partial \Omega _{a},\end{aligned}}} where the boundary ∂ Ω {\displaystyle \partial \Omega } contains NR binding sites ∂ Ω i ⊂ ∂ Ω {\displaystyle \partial \Omega _{i}\subset \partial \Omega } ( ∂ Ω a = ⋃ i = 1 N R ∂ Ω i , ∂ Ω r = ∂ Ω − ∂ Ω a {\displaystyle \partial \Omega _{a}=\bigcup \limits _{i=1}^{N_{R}}\partial \Omega _{i},\ \partial \Omega _{r}=\partial \Omega -\partial \Omega _{a}} ). The single particle survival probability is Pr { t 1 > t } = ∫ Ω p ( x , t ) d x , {\displaystyle \Pr\{t_{1}>t\}=\int \limits _{\Omega }p(x,t)dx,} so that Pr { τ 1 = t } = d d t Pr { τ 1 < t } = N ( Pr { t 1 > t } ) N − 1 Pr { t 1 = t } , {\displaystyle \Pr\{\tau ^{1}=t\}={\frac {d}{dt}}\Pr\{\tau ^{1}<t\}=N(\Pr\{t_{1}>t\})^{N-1}\Pr \limits \{t_{1}=t\},} where Pr { t 1 = t } = ∮ ∂ Ω a ∂ p ( x , t ) ∂ n d S x {\displaystyle \Pr\{t_{1}=t\}={\oint _{\partial \Omega _{a}}}{\frac {\partial p(x,t)}{\partial n}}\,dS_{x}} and Pr { t 1 = t } = N R ∮ ∂ Ω 1 ∂ p ( x , t ) ∂ n d S x {\displaystyle \Pr\{t_{1}=t\}=N_{R}{\oint _{\partial \Omega _{1}}}{\frac {\partial p(x,t)}{\partial n}}\,dS_{x}} . The probability density function (pdf) of the arrival time is Pr { τ 1 = t } = N N R [ ∫ Ω p ( x , t ) d x ] N − 1 ∮ ∂ Ω 1 ∂ p ( x , t ) ∂ n d S x , {\displaystyle \Pr\{\tau ^{1}=t\}=NN_{R}\left[\int \limits _{\Omega }p(x,t)dx\right]^{N-1}\oint \limits _{\partial \Omega _{1}}{\frac {\partial p(x,t)}{\partial n}}dS_{x},} which gives the MFPT τ ¯ 1 = ∫ 0 ∞ Pr { τ 1 > t } d t = ∫ 0 ∞ [ Pr { t 1 > t } ] N d t . 
{\displaystyle {\bar {\tau }}^{1}=\int \limits \limits _{0}^{\infty }\Pr\{\tau ^{1}>t\}dt=\int \limits _{0}^{\infty }\left[\Pr\{t_{1}>t\}\right]^{N}dt.} The probability Pr { t 1 > t } {\displaystyle \Pr\{t_{1}>t\}} can be computed using short-time asymptotics of the diffusion equation as shown in the next sections. The short-time asymptotic of the diffusion equation is based on the ray method approximation. For an semi-interval [ 0 , ∞ [ {\displaystyle [0,\infty [} , the survival pdf is solution of ∂ ( x , t ) ∂ t = D ∂ 2 p ( x , t ) ∂ x 2 for x > 0 , t > 0 p ( x , 0 ) = δ ( x − a ) for x > 0 , p ( 0 , t ) = 0 for t > 0 , {\displaystyle {\begin{aligned}{\frac {\partial (x,t)}{\partial t}}&=D{\frac {\partial ^{2}p(x,t)}{\partial x^{2}}}\quad {\mbox{ for }}x>0,\ t>0\\p(x,0)&=\delta (x-a)\quad {\mbox{ for }}\ x>0,\quad p(0,t)=0\quad {\mbox{ for }}t>0,\end{aligned}}} that is p ( x , t ) = 1 4 D π t [ exp ⁡ { − ( x − a ) 2 4 D t } − exp ⁡ { − ( x + a ) 2 4 D t } ] . {\displaystyle p(x,t)={\frac {1}{\sqrt {4D\pi t}}}\left[\exp \left\{-{\frac {(x-a)^{2}}{4Dt}}\right\}-\exp \left\{-{\frac {(x+a)^{2}}{4Dt}}\right\}\right].} The survival probability with D=1 is Pr { t 1 > t } = ∫ 0 ∞ p ( x , t ) d x = 1 − 2 π ∫ a / 4 t ∞ e − u 2 d u {\displaystyle \Pr\{t_{1}>t\}=\int \limits \limits _{0}^{\infty }p(x,t)\,dx=1-{\frac {2}{\sqrt {\pi }}}\int \limits \limits _{a/{\sqrt {4t}}}^{\infty }e^{-u^{2}}\,du} . To compute the MFPT, we expand the complementary error function 2 π ∫ x ∞ e − u 2 d u = e − x 2 x π ( 1 − 1 2 x 2 + O ( x − 4 ) ) for x ≫ 1 , {\displaystyle {\frac {2}{\sqrt {\pi }}}\int \limits \limits _{x}^{\infty }e^{-u^{2}}\,du={\frac {e^{-x^{2}}}{x{\sqrt {\pi }}}}\left(1-{\frac {1}{2x^{2}}}+O(x^{-4})\right)\quad {\mbox{for}}\ x\gg 1,} which gives τ ¯ 1 = ∫ 0 ∞ [ Pr { t 1 > t } ] N d t ≈ ∫ 0 ∞ exp ⁡ { N ln ⁡ ( 1 − e − ( a / 4 t ) 2 ( a / 4 t ) π ) } d t ≈ a 2 4 ∫ 0 ∞ exp ⁡ { − N u e − 1 u π } d u {\displaystyle {\bar {\tau }}^{1}=\int \limits \limits _{0}^{\infty }\left[\Pr\{t_{1}>t\}\right]^{N}dt\approx \int \limits \limits _{0}^{\infty }\exp \left\{N\ln \left(1-{\frac {e^{-(a/{\sqrt {4t}})^{2}}}{(a/{\sqrt {4t}}){\sqrt {\pi }}}}\right)\right\}\,dt\approx {\frac {a^{2}}{4}}\int \limits \limits _{0}^{\infty }\exp \left\{-N{\frac {{\sqrt {u}}e^{-{\frac {1}{u}}}}{\sqrt {\pi }}}\right\}du} , leading (the main contribution of the integral is near 0) to τ ¯ 1 ≈ a 2 4 D ln ⁡ N π for N ≫ 1. {\displaystyle {\bar {\tau }}^{1}\approx {\frac {a^{2}}{4D\ln {\frac {N}{\sqrt {\pi }}}}}\quad {\mbox{for}}\ N\gg 1.} This result is reminiscent of using the Gumbel's law. Similarly, escape from the interval [0,a] is computed from the infinite sum p ( x , t | y ) = 1 4 D π t ∑ n = − ∞ ∞ [ exp ⁡ { − ( x − y + 2 n a ) 2 4 t } − exp ⁡ { − ( x + y + 2 n a ) 2 4 t } ] {\displaystyle p(x,t\,|\,y)={\frac {1}{\sqrt {4D\pi t}}}\sum \limits _{n=-\infty }^{\infty }\left[\exp \left\{-{\frac {(x-y+2na)^{2}}{4t}}\right\}-\exp \left\{-{\frac {(x+y+2na)^{2}}{4t}}\right\}\right]} .The conditional survival probability is approximated by [ 1 ] Pr { t 1 > t | y } = ∫ 0 a p ( x , t | y ) d x d s ∼ 1 − max 2 t π [ e − y 2 / 4 t y , e − ( a − y ) 2 / 4 t a − y ] as t → 0 {\displaystyle \Pr\{t_{1}>t\,|\,y\}=\int \limits \limits _{0}^{a}p(x,t\,|\,y)\,dxds\sim 1-\max {\frac {2{\sqrt {t}}}{\sqrt {\pi }}}\left[{\frac {e^{-y^{2}/4t}}{y}},{\frac {e^{-(a-y)^{2}/4t}}{a-y}}\right]\quad {\mbox{as}}\ t\to 0} , where the maximum occurs at δ = {\displaystyle \delta =} min[y,a-y] for 0<y<a (the shortest ray from y to the boundary). 
All other integrals can be computed explicitly, leading to τ ¯ 1 = ∫ 0 ∞ [ Pr { t 1 > t } ] N d t ≈ ∫ 0 ∞ exp ⁡ { N ln ⁡ ( 1 − 8 t δ π e − δ 2 / 16 t ) } d t ≈ δ 2 16 D ln ⁡ 2 N π for N ≫ 1. {\displaystyle {\bar {\tau }}^{1}=\int \limits \limits _{0}^{\infty }\left[\Pr\{t_{1}>t\}\right]^{N}dt\approx \int \limits \limits _{0}^{\infty }\exp \left\{N\ln \left(1-{\frac {8{\sqrt {t}}}{\delta {\sqrt {\pi }}}}e^{-\delta ^{2}/16t}\right)\right\}dt\approx {\frac {\delta ^{2}}{16D\ln {\frac {2N}{\sqrt {\pi }}}}}\quad {\mbox{for}}\ N\gg 1.} The arrival times of the fastest among many Brownian motions are expressed in terms of the shortest distance from the source S to the absorbing window A, measured by the distance δ m i n = d ( S , A ) , {\displaystyle \delta _{min}=d(S,A),} where d is the associated Euclidean distance . Interestingly, trajectories followed by the fastest are as close as possible from the optimal trajectories. In technical language, the associated trajectories of the fastest among N, concentrate near the optimal trajectory (shortest path) when the number N of particles increases. For a diffusion coefficient D and a window of size a, the expected first arrival times of N identically independent distributed Brownian particles initially positioned at the source S are expressed in the following asymptotic formulas : τ ¯ d 1 ≈ δ m i n 2 4 D ln ⁡ ( N π ) , in dim 1, valid for N ≫ 1 , {\displaystyle {\bar {\tau }}^{d1}\approx {\frac {\delta _{min}^{2}}{4D\ln \left({\frac {N}{\sqrt {\pi }}}\right)}},{\hbox{in dim 1, valid for}}N\gg 1,} τ ¯ d 2 ≈ δ m i n 2 4 D log ⁡ ( π 2 N 8 log ⁡ ( 1 a ) ) , in dim 2 for N log ⁡ ( 1 ϵ ) ≫ 1 , {\textstyle {\bar {\tau }}^{d2}\approx {\frac {\delta _{min}^{2}}{4D\log \left({\frac {\pi {\sqrt {2}}N}{8\log \left({\frac {1}{a}}\right)}}\right)}},{\hbox{ in dim 2 for }}{\frac {N}{\log({\frac {1}{\epsilon }})}}\gg 1,} τ ¯ d 3 ≈ δ m i n 2 4 D log ⁡ ( N 4 a 2 π 1 / 2 δ m i n 2 ) , in dim 3 , for N a 2 δ m i n 2 ≫ 1. {\displaystyle {\bar {\tau }}^{d3}\approx {\frac {\delta _{min}^{2}}{4D{\log \left(N{\frac {4a^{2}}{\pi ^{1/2}\delta _{min}^{2}}}\right)}}},{\hbox{ in dim }}3,{\hbox{ for }}{\frac {Na^{2}}{\delta _{min}^{2}}}\gg 1.} These formulas show that the expected arrival time of the fastest particle is in dimension 1 and 2, O(1/\log(N)). They should be used instead of the classical forward rate in models of activation in biochemical reactions. The method to derive formulas is based on short-time asymptotic and the Green's function representation of the Helmholtz equation. Note that other distributions could lead to other decays with respect N. The optimal paths for the fastest can be found using the Wencell-Freidlin functional in the Large-deviation theory. These paths correspond to the short-time asymptotics of the diffusion equation from a source to a target. In general, the exact solution is hard to find, especially for a space containing various distribution of obstacles. 
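As a rough numerical check (not part of the original article), the dimension-1 asymptotic above can be compared against a direct simulation. With the convention dX_t = sqrt(2D) dB_t used here, the first-passage time of a single Brownian particle started a distance delta from an absorbing point on the half-line can be sampled exactly as T = delta^2 / (2 D Z^2) with Z a standard normal variable, so the fastest of N arrivals is simply the minimum of N such samples. The Python sketch below (function names are mine) compares the Monte Carlo mean with delta^2 / (4 D ln(N / sqrt(pi))); the agreement should improve slowly as N grows, reflecting the logarithmic dependence emphasized in the text.

```python
# Illustrative sketch, not from the article: Monte Carlo check of the
# dimension-1 formula  tau_bar ~ delta^2 / (4 D ln(N / sqrt(pi))).
# Exact single-particle first-passage sampling: T = delta^2 / (2 D Z^2).
import numpy as np

def fastest_arrival_mc(N, delta=1.0, D=1.0, trials=5000, seed=0):
    """Mean arrival time of the fastest of N independent Brownian particles."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((trials, N))
    t = delta**2 / (2.0 * D * z**2)      # exact passage times, one row per trial
    return t.min(axis=1).mean()          # average of the minimum over N particles

def fastest_arrival_asymptotic(N, delta=1.0, D=1.0):
    """Large-N formula quoted in the text (dimension 1)."""
    return delta**2 / (4.0 * D * np.log(N / np.sqrt(np.pi)))

for N in (10, 100, 1000):
    print(N, fastest_arrival_mc(N), fastest_arrival_asymptotic(N))
```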
The Wiener integral representation of the pdf for a pure Brownian motion is obtained for a zero drift and diffusion tensor σ = D {\displaystyle \sigma =D} constant, so that it is given by the probability of a sampled path until it exits at the small window ∂ Ω a {\displaystyle \partial \Omega _{a}} at the random time T P r { x N ( t 1 , M ) ∈ Ω , x N ( t 2 , M ) ∈ Ω , … , x M ( t ) = x , t ≤ T ≤ t + Δ t | x ( 0 ) = y } {\displaystyle Pr\{x_{N}(t_{1,M})\in \Omega ,{x}_{N}(t_{2,M})\in \Omega ,\dots ,x_{M}(t)=x,t\leq T\leq t+\Delta t|x(0)=y\}} = [ ∫ Ω ⋯ ∫ Ω ∏ j = 1 M d y j ( 2 π Δ t ) n det σ ( x ) ( t j − 1 , M ) ) exp ⁡ { − 1 2 Δ t [ y j − x ( t j − 1 , N ) − a ( x ( t j − 1 , N ) ) Δ t ] T σ − 1 ( x ( t j − 1 , N ) ) [ y j − x ( t j − 1 , N ) − a ( x ( t j − 1 , N ) ) Δ t ] } {\displaystyle =[\int \limits _{\Omega }\cdots \int \limits \limits _{\Omega }\prod _{j=1}^{M}{\frac {d{y}_{j}}{\sqrt {(2\pi \Delta t)^{n}\det {\sigma }(x)(t_{j-1,M}))}}}\exp\{-{\frac {1}{2\Delta t}}\left[{y}_{j}-x(t_{j-1,N})-{a}({x}(t_{j-1,N}))\Delta t\right]^{T}{\sigma }^{-1}(x(t_{j-1,N}))\left[{y}_{j}-x(t_{j-1,N})-{a}(x(t_{j-1,N}))\Delta t\right]\}} where Δ t = t / M , t j , N = j Δ t , x ( t 0 , N ) = y and y j = x ( t j , N ) {\displaystyle \Delta t=t/M,t_{j,N}=j\Delta t,\ x(t_{0,N})=y{\hbox{ and }}{y}_{j}=x(t_{j,N})} in the product and T is the exit time in the narrow absorbing window ∂ Ω a . {\displaystyle \partial \Omega _{a}.} Finally, ⟨ τ ( n ) ⟩ = ∫ 0 ∞ exp ⁡ { n log ⁡ ∫ Ω p ( x , t | y ) d x } d t = ∫ 0 ∞ τ σ P r { Path σ ∈ S n ( y ) , τ σ = t } d t , {\displaystyle \langle \tau ^{(n)}\rangle =\int \limits \limits _{0}^{\infty }\exp \left\{n\log \int \limits _{\Omega }p(x,t|y)\,dx\right\}dt=\int _{0}^{\infty }\tau _{\sigma }Pr\{{\hbox{ Path }}\sigma \in S_{n}(y),\tau _{\sigma }=t\}dt,} where S n ( y ) {\displaystyle S_{n}(y)} is the ensemble of shortest paths selected among n Brownian trajectories, starting at point y and exiting between time t and t+dt from the domain Ω {\displaystyle \Omega } . The probability P r { Path σ ∈ S n } {\displaystyle Pr\{{\hbox{ Path }}\sigma \in S_{n}\}} is used to show that the empirical stochastic trajectories of S n {\displaystyle S_{n}} concentrate near the shortest paths starting from y and ending at the small absorbing window ∂ Ω a {\displaystyle \partial \Omega _{a}} , under the condition that ϵ = | ∂ Ω a | | ∂ Ω | ≪ 1 {\displaystyle \epsilon ={\frac {|\partial \Omega _{a}|}{|\partial \Omega |}}\ll 1} .  The paths of S n ( y ) {\displaystyle S_{n}(y)} can be approximated using discrete broken lines among a finite number of points and we denote the associated ensemble by S ~ n ( y ) {\displaystyle {\tilde {S}}_{n}(y)} .  Bayes' rule leads to P r { Path σ ∈ S ~ n ( y ) | t < τ σ < t + d t } = ∑ m = 0 ∞ P r { Path σ ∈ S ~ n ( y ) | m , t < τ σ < t + d t } P r { m steps } {\displaystyle Pr\{{\hbox{ Path }}\sigma \in {\tilde {S}}_{n}(y)|t<\tau _{\sigma }<t+dt\}=\sum _{m=0}^{\infty }Pr\{{\hbox{ Path }}\sigma \in {\tilde {S}}_{n}(y)|m,t<\tau _{\sigma }<t+dt\}Pr\{m{\mbox{ steps}}\}} where P r { m steps } = P r { the paths of S ~ n ( y ) exit in m steps } {\displaystyle Pr\{m{\mbox{ steps}}\}=Pr\{{\mbox{the paths of }}{\tilde {S}}_{n}(y){\mbox{exit in m steps}}\}} is the probability that a path of S ~ n ( y ) {\displaystyle {\tilde {S}}_{n}(y)} exits in m-discrete time steps. A path made of broken lines (random walk with a time step Δ t {\displaystyle \Delta t} ) can be expressed using Wiener path-integral.  
The probability of a Brownian path x(s) can be expressed in the limit of a path-integral with the functional: P r { x ( s ) | s ∈ [ 0 , t ] } ≈ exp ⁡ ( − ∫ 0 t | x ˙ | 2 d s ) . {\displaystyle Pr\{x(s)|s\in [0,t]\}\approx \exp \left(-\int _{0}^{t}|{\dot {x}}|^{2}ds\right).} The Survival probability conditioned on starting at y is given by the Wiener representation: S ( t | x 0 ) = ∫ x ∈ Ω d x ∫ x ( 0 ) x ( t ) = x D ( x ) exp ⁡ ( − ∫ 0 t | x ˙ | 2 d s ) , {\displaystyle S(t|x_{0})=\int _{x\in \Omega }dx\int _{x(0)}^{x(t)=x}{\mathcal {D}}(x)\exp \left(-\int _{0}^{t}|{\dot {x}}|^{2}ds\right),} where D ( x ) {\displaystyle {\mathcal {D}}(x)} is the limit Wiener measure: the exterior integral is taken over all end points x and the path integral is over all paths starting from x(0). When we consider n-independent paths ( σ 1 , . . σ n ) {\displaystyle (\sigma _{1},..\sigma _{n})} (made of points with a time step Δ t {\displaystyle \Delta t} that exit in m-steps, the probability of such an event is P r { σ 1 , . . σ n ∈ S n ( y ) | m , τ σ = m Δ t } = ( ∫ y 0 = y ⋯ ∫ y j ∈ Ω ∫ y n ∈ ∂ Ω a 1 ( 4 D Δ t ) d m / 2 ∏ j = 1 m exp ⁡ { − 1 4 D Δ t [ | y j − y j − 1 ) | 2 ] } ) n {\displaystyle Pr\{\sigma _{1},..\sigma _{n}\in S_{n}(y)|m,\tau _{\sigma }=m\Delta t\}=\left(\int \limits _{y_{0}=y}\cdots \int \limits _{{y}_{j}\in \Omega }\int \limits _{{y}_{n}\in \partial \Omega _{a}}{\frac {1}{(4D\Delta t)^{dm/2}}}\prod _{j=1}^{m}\exp {\Bigg \{}-{\frac {1}{4D\Delta t}}\left[|{y}_{j}-{y}_{j-1})|^{2}\right]\}\right)^{n}} ≈ ( 1 ( 4 D Δ t ) d m / 2 ) n ∫ x D ( x ) exp ⁡ { − n ∫ 0 m Δ t x ˙ 2 d s } {\displaystyle \approx \left({\frac {1}{(4D\Delta t)^{dm/2}}}\right)^{n}\int _{x}{\mathcal {D}}(x)\exp {\Bigg \{}-n\int \limits _{0}^{m\Delta t}{\dot {x}}^{2}ds{\Bigg \}}} .Indeed, when there are n paths of m steps, and the fastest one escapes in m-steps, they should all exit in m steps. Using the limit of path integral, we get heuristically the representation P r { Path σ ∈ S ~ n ( y ) | m , τ σ = m Δ t } = ( ∫ y 0 = y ⋯ ∫ y j ∈ Ω ∫ y n ∈ ∂ Ω a 1 ( 4 D Δ t ) d m / 2 ∏ j = 1 m exp ⁡ ( − 1 4 D Δ t [ | y j − y j − 1 ) | 2 ] ) ) n {\displaystyle Pr\{{\hbox{ Path }}\sigma \in {\tilde {S}}_{n}(y)|m,\tau _{\sigma }=m\Delta t\}=\left(\int \limits _{{y}_{0}=y}\cdots \int \limits _{{y}_{j}\in \Omega }\int \limits _{{y}_{n}\in \partial \Omega _{a}}{\frac {1}{(4D\Delta t)^{dm/2}}}\prod _{j=1}^{m}\exp(-{\frac {1}{4D\Delta t}}\left[|{y}_{j}-{y}_{j-1})|^{2}\right])\right)^{n}} ≈ ∫ x ∈ Ω d x ∫ x ( 0 ) = y x ( t ) = x D ( x ) exp ⁡ ( − n ∫ 0 m Δ t x ˙ 2 d s ) , {\displaystyle \approx \int _{x\in \Omega }dx\int _{x(0)=y}^{x(t)=x}{D}(x)\exp(-n\int \limits _{0}^{m\Delta t}{\dot {x}}^{2}ds),} where the integral is taken over all paths starting at y(0) and exiting at time m Δ t {\displaystyle m\Delta t} . This formula suggests that when n is large, only the paths that minimize the integrant will contribute. For large n, this formula suggests that paths that will contribute the most are the ones that will minimize the exponent, which allows selecting the paths for which the energy functional is minimal, that is E = min X ∈ P t ∫ 0 T x ˙ 2 d s , {\displaystyle E=\min _{X\in {\mathcal {P}}_{t}}\int \limits _{0}^{T}{\dot {x}}^{2}ds,} where the integration is taken over the ensemble of regular paths P t {\displaystyle {\mathcal {P}}_{t}} inside Ω {\displaystyle \Omega } starting at y and exiting in ∂ Ω a {\displaystyle \partial \Omega _{a}} , defined as P T = { P ( 0 ) = y , P ( T ) ∈ ∂ Ω a and P ( s ) ∈ Ω and 0 ≤ s ≤ T } . 
{\displaystyle {\mathcal {P}}_{T}=\{P(0)=y,P(T)\in \partial \Omega _{a}{\hbox{ and }}P(s)\in \Omega {\hbox{ and }}0\leq s\leq T\}.} This formal argument shows that the random paths associated to the fastest exit time are concentrated near the shortest paths. Indeed, the Euler-Lagrange equations for the extremal problem are the classical geodesics between y and a point in the narrow window ∂ Ω a {\displaystyle \partial \Omega _{a}} . The formula for the fastest escape can generalize to the case where the absorbing window is located in funnel cusp and the initial particles are distributed outside the cusp. The cusp has a size ϵ {\displaystyle \epsilon } in the opening and a curvature R. The diffusion coefficient is D. The shortest arrival time, valid for large n is given by τ ( n ) ≈ π 2 R 3 4 ϵ D ( 1 − cos ⁡ ( c ϵ ~ ) ϵ ~ ) 2 log ⁡ ( 2 n π ) . {\displaystyle \tau ^{(n)}\approx {\frac {\pi ^{2}R^{3}}{4\epsilon D({\frac {1-\cos(c{\sqrt {\tilde {\epsilon }}})}{\tilde {\epsilon }}})^{2}\log({\frac {2n}{\sqrt {\pi }}})}}.} Here ϵ ~ = ϵ R {\displaystyle {\tilde {\epsilon }}={\frac {\epsilon }{R}}} and c is a constant that depends on the diameter of the domain. The time taken by the first arrivers is proportional to the reciprocal of the size of the narrow target ϵ {\displaystyle \epsilon } . This formula is derived for fixed geometry and large n and not in the opposite limit of large n and small epsilon. [ 18 ] How nature sets the disproportionate numbers of particles remain unclear, but can be found using the theory of diffusion. One example is the number of neurotransmitters around 2000 to 3000 released during synaptic transmission, that are set to compensate the low copy number of receptors, so the probability of activation is restored to one. [ 19 ] [ 20 ] In natural processes these large numbers should not be considered wasteful, but are necessary for generating the fastest possible response and make possible rare events that otherwise would never happen. This property is universal, ranging from the molecular scale to the population level. [ 21 ] Nature's strategy for optimizing the response time is not necessarily defined by the physics of the motion of an individual particle, but rather by the extreme statistics, that select the shortest paths. In addition, the search for a small activation site selects the particle to arrive first: although these trajectories are rare, they are the ones that set the time scale. We may need to reconsider our estimation toward numbers when punctioning nature in agreement with the redundant principle that quantifies the request to achieve the biological function. [ 21 ]
https://en.wikipedia.org/wiki/Redundancy_principle_(biology)
A Redundant Array of Inexpensive Servers (RAIS) or Redundant Array of Independent Nodes (RAIN) is the use of multiple servers to maintain service if one server fails. This is similar in concept to how RAID turns a cluster of ordinary disks into a single block device. RAIS was designed to provide the benefits of a symmetric multiprocessor system (SMP) at the entry cost of computer clusters. The term may imply some kind of load balancing between the servers. RAIS is a simple, high-performance, mainframe-grade alternative to more expensive enterprise computing infrastructure solutions. It turns an array of ordinary servers into a single virtual machine, much as RAID turns a cluster of ordinary disks into a single block device. Every RAIS node is a stateless computing unit. RAIS stripes and mirrors application code and memory across an array of ordinary servers using the standard RAID schemata of level 0, level 1, level 5, and level 1+0. This is made possible by a memory management system called Versioned Memory. [ citation needed ] Data blocks of each stream are striped across the array servers. A fast packet switch is used to connect server and client stations. Each server has a dedicated network segment, its own storage, and its own CPU, and clients contact the servers one by one. Overall capacity increases with the number of servers. In a manner more usually associated with high-cost SMP architectures, RAIS achieves this by turning a cluster of independent servers into a single large server running applications across a virtualised network of nodes. The concept is analogous to how RAID stripes and mirrors data across multiple independent disks: RAIS stripes and mirrors the code of an application program across multiple independent nodes of a cluster. If the data is split into segments and distributed between multiple storage providers, the technology has applications for improving confidentiality in cloud computing. [ 1 ]
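To make the striping idea concrete, here is a deliberately simplified Python sketch of my own (it is not the RAIS/Versioned Memory mechanism, whose details are not given above): a blob of application state is spread across several "server" lists with one XOR parity chunk per stripe, a non-rotating variant of the level-5 schema mentioned above (strictly closer to level 4), so the contents of any single failed node can be rebuilt from the survivors.

```python
# Toy RAID-style striping across "servers" with XOR parity; illustrative only.
from typing import List, Optional

def xor_bytes(blocks: List[bytes]) -> bytes:
    # XOR an equal-length list of byte blocks together.
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def stripe(data: bytes, n_nodes: int, chunk: int = 4) -> List[List[bytes]]:
    # Pad, then cut the data into stripes of (n_nodes - 1) data chunks plus
    # one parity chunk, handing one chunk per stripe to each node.
    data = data + bytes((-len(data)) % (chunk * (n_nodes - 1)))
    nodes: List[List[bytes]] = [[] for _ in range(n_nodes)]
    for start in range(0, len(data), chunk * (n_nodes - 1)):
        blocks = [data[start + i * chunk: start + (i + 1) * chunk]
                  for i in range(n_nodes - 1)]
        blocks.append(xor_bytes(blocks))            # parity chunk
        for node, block in zip(nodes, blocks):
            node.append(block)
    return nodes

def rebuild(nodes: List[Optional[List[bytes]]]) -> List[List[bytes]]:
    # Every stripe XORs to zero, so a missing chunk is the XOR of the rest.
    missing = nodes.index(None)
    survivors = [n for n in nodes if n is not None]
    nodes[missing] = [xor_bytes([n[i] for n in survivors])
                      for i in range(len(survivors[0]))]
    return nodes

nodes = stripe(b"stateless application state", n_nodes=4)
nodes[2] = None          # simulate the loss of one server
nodes = rebuild(nodes)   # service continues with the node reconstructed
```

A real system would stripe live memory and code pages rather than a byte string, and would rotate the parity block across nodes as RAID level 5 does.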
https://en.wikipedia.org/wiki/Redundant_Array_of_Inexpensive_Servers
CORDIC ( coordinate rotation digital computer ), Volder's algorithm , Digit-by-digit method , Circular CORDIC ( Jack E. Volder ), [ 1 ] [ 2 ] Linear CORDIC , Hyperbolic CORDIC (John Stephen Walther), [ 3 ] [ 4 ] and Generalized Hyperbolic CORDIC ( GH CORDIC ) (Yuanyong Luo et al.), [ 5 ] [ 6 ] is a simple and efficient algorithm to calculate trigonometric functions , hyperbolic functions , square roots , multiplications , divisions , and exponentials and logarithms with arbitrary base, typically converging with one digit (or bit) per iteration. CORDIC is therefore also an example of digit-by-digit algorithms . CORDIC and closely related methods known as pseudo-multiplication and pseudo-division or factor combining are commonly used when no hardware multiplier is available (e.g. in simple microcontrollers and field-programmable gate arrays or FPGAs), as the only operations they require are additions , subtractions , bitshift and lookup tables . As such, they all belong to the class of shift-and-add algorithms . In computer science, CORDIC is often used to implement floating-point arithmetic when the target platform lacks hardware multiply for cost or space reasons. Similar mathematical techniques were published by Henry Briggs as early as 1624 [ 7 ] [ 8 ] and Robert Flower in 1771, [ 9 ] but CORDIC is better optimized for low-complexity finite-state CPUs. CORDIC was conceived in 1956 [ 10 ] [ 11 ] by Jack E. Volder at the aeroelectronics department of Convair out of necessity to replace the analog resolver in the B-58 bomber 's navigation computer with a more accurate and faster real-time digital solution. [ 11 ] Therefore, CORDIC is sometimes referred to as a digital resolver . [ 12 ] [ 13 ] In his research Volder was inspired by a formula in the 1946 edition of the CRC Handbook of Chemistry and Physics : [ 11 ] where φ {\displaystyle \varphi } is such that tan ⁡ ( φ ) = 2 − n {\displaystyle \tan(\varphi )=2^{-n}} , and K n := 1 + 2 − 2 n {\displaystyle K_{n}:={\sqrt {1+2^{-2n}}}} . His research led to an internal technical report proposing the CORDIC algorithm to solve sine and cosine functions and a prototypical computer implementing it. [ 10 ] [ 11 ] The report also discussed the possibility to compute hyperbolic coordinate rotation , logarithms and exponential functions with modified CORDIC algorithms. [ 10 ] [ 11 ] Utilizing CORDIC for multiplication and division was also conceived at this time. [ 11 ] Based on the CORDIC principle, Dan H. Daggett, a colleague of Volder at Convair, developed conversion algorithms between binary and binary-coded decimal (BCD). [ 11 ] [ 14 ] In 1958, Convair finally started to build a demonstration system to solve radar fix –taking problems named CORDIC I , completed in 1960 without Volder, who had left the company already. [ 1 ] [ 11 ] More universal CORDIC II models A (stationary) and B (airborne) were built and tested by Daggett and Harry Schuss in 1962. [ 11 ] [ 15 ] Volder's CORDIC algorithm was first described in public in 1959, [ 1 ] [ 2 ] [ 11 ] [ 13 ] [ 16 ] which caused it to be incorporated into navigation computers by companies including Martin-Orlando , Computer Control , Litton , Kearfott , Lear-Siegler , Sperry , Raytheon , and Collins Radio . [ 11 ] Volder teamed up with Malcolm McMillan to build Athena , a fixed-point desktop calculator utilizing his binary CORDIC algorithm. [ 17 ] The design was introduced to Hewlett-Packard in June 1965, but not accepted. [ 17 ] Still, McMillan introduced David S. 
Cochran (HP) to Volder's algorithm and when Cochran later met Volder he referred him to a similar approach John E. Meggitt (IBM [ 18 ] ) had proposed as pseudo-multiplication and pseudo-division in 1961. [ 18 ] [ 19 ] Meggitt's method also suggested the use of base 10 [ 18 ] rather than base 2 , as used by Volder's CORDIC so far. These efforts led to the ROMable logic implementation of a decimal CORDIC prototype machine inside of Hewlett-Packard in 1966, [ 20 ] [ 19 ] built by and conceptually derived from Thomas E. Osborne 's prototypical Green Machine , a four-function, floating-point desktop calculator he had completed in DTL logic [ 17 ] in December 1964. [ 21 ] This project resulted in the public demonstration of Hewlett-Packard's first desktop calculator with scientific functions, the HP 9100A in March 1968, with series production starting later that year. [ 17 ] [ 21 ] [ 22 ] [ 23 ] When Wang Laboratories found that the HP 9100A used an approach similar to the factor combining method in their earlier LOCI-1 [ 24 ] (September 1964) and LOCI-2 (January 1965) [ 25 ] [ 26 ] Logarithmic Computing Instrument desktop calculators, [ 27 ] they unsuccessfully accused Hewlett-Packard of infringement of one of An Wang 's patents in 1968. [ 19 ] [ 28 ] [ 29 ] [ 30 ] John Stephen Walther at Hewlett-Packard generalized the algorithm into the Unified CORDIC algorithm in 1971, allowing it to calculate hyperbolic functions , natural exponentials , natural logarithms , multiplications , divisions , and square roots . [ 31 ] [ 3 ] [ 4 ] [ 32 ] The CORDIC subroutines for trigonometric and hyperbolic functions could share most of their code. [ 28 ] This development resulted in the first scientific handheld calculator , the HP-35 in 1972. [ 28 ] [ 33 ] [ 34 ] [ 35 ] [ 36 ] [ 37 ] Based on hyperbolic CORDIC, Yuanyong Luo et al. further proposed a Generalized Hyperbolic CORDIC (GH CORDIC) to directly compute logarithms and exponentials with an arbitrary fixed base in 2019. [ 5 ] [ 6 ] [ 38 ] [ 39 ] [ 40 ] Theoretically, Hyperbolic CORDIC is a special case of GH CORDIC. [ 5 ] Originally, CORDIC was implemented only using the binary numeral system and despite Meggitt suggesting the use of the decimal system for his pseudo-multiplication approach, decimal CORDIC continued to remain mostly unheard of for several more years, so that Hermann Schmid and Anthony Bogacki still suggested it as a novelty as late as 1973 [ 16 ] [ 13 ] [ 41 ] [ 42 ] [ 43 ] and it was found only later that Hewlett-Packard had implemented it in 1966 already. [ 11 ] [ 13 ] [ 20 ] [ 28 ] Decimal CORDIC became widely used in pocket calculators , [ 13 ] most of which operate in binary-coded decimal (BCD) rather than binary. This change in the input and output format did not alter CORDIC's core calculation algorithms. CORDIC is particularly well-suited for handheld calculators, in which low cost – and thus low chip gate count – is much more important than speed. CORDIC has been implemented in the ARM-based STM32G4 , Intel 8087 , [ 43 ] [ 44 ] [ 45 ] [ 46 ] [ 47 ] 80287 , [ 47 ] [ 48 ] 80387 [ 47 ] [ 48 ] up to the 80486 [ 43 ] coprocessor series as well as in the Motorola 68881 [ 43 ] [ 44 ] and 68882 for some kinds of floating-point instructions, mainly as a way to reduce the gate counts (and complexity) of the FPU sub-system. 
CORDIC uses simple shift-add operations for several computing tasks such as the calculation of trigonometric, hyperbolic and logarithmic functions, real and complex multiplications, division, square-root calculation, solution of linear systems, eigenvalue estimation, singular value decomposition , QR factorization and many others. As a consequence, CORDIC has been used for applications in diverse areas such as signal and image processing , communication systems , robotics and 3D graphics apart from general scientific and technical computation. [ 49 ] [ 50 ] The algorithm was used in the navigational system of the Apollo program 's Lunar Roving Vehicle to compute bearing and range, or distance from the Lunar module . [ 51 ] [ 52 ] CORDIC was used to implement the Intel 8087 math coprocessor in 1980, avoiding the need to implement hardware multiplication. [ 53 ] CORDIC is generally faster than other approaches when a hardware multiplier is not available (e.g., a microcontroller), or when the number of gates required to implement the functions it supports should be minimized (e.g., in an FPGA or ASIC ). In fact, CORDIC is a standard drop-in IP in FPGA development applications such as Vivado for Xilinx, while a power series implementation is not due to the specificity of such an IP, i.e. CORDIC can compute many different functions (general purpose) while a hardware multiplier configured to execute power series implementations can only compute the function it was designed for. On the other hand, when a hardware multiplier is available ( e.g. , in a DSP microprocessor), table-lookup methods and power series are generally faster than CORDIC. In recent years, the CORDIC algorithm has been used extensively for various biomedical applications, especially in FPGA implementations. [ citation needed ] The STM32G4 , STM32U5 and STM32H5 series and certain STM32H7 series of MCUs implement a CORDIC module to accelerate computations in various mixed signal applications such as graphics for human-machine interface and field oriented control of motors. While not as fast as a power series approximation, CORDIC is indeed faster than interpolating table based implementations such as the ones provided by the ARM CMSIS and C standard libraries. [ 54 ] Though the results may be slightly less accurate as the CORDIC modules provided only achieve 20 bits of precision in the result. For example, most of the performance difference compared to the ARM implementation is due to the overhead of the interpolation algorithm, which achieves full floating point precision (24 bits) and can likely achieve relative error to that precision. [ 55 ] Another benefit is that the CORDIC module is a coprocessor and can be run in parallel with other CPU tasks. The issue with using Taylor series is that while they do provide small absolute error, they do not exhibit well behaved relative error. [ 56 ] Other means of polynomial approximation, such as minimax optimization, may be used to control both kinds of error. Many older systems with integer-only CPUs have implemented CORDIC to varying extents as part of their IEEE floating-point libraries. As most modern general-purpose CPUs have floating-point registers with common operations such as add, subtract, multiply, divide, sine, cosine, square root, log 10 , natural log, the need to implement CORDIC in them with software is nearly non-existent. Only microcontroller or special safety and time-constrained software applications would need to consider using CORDIC. 
CORDIC can be used to calculate a number of different functions. This explanation shows how to use CORDIC in rotation mode to calculate the sine and cosine of an angle, assuming that the desired angle is given in radians and represented in a fixed-point format. To determine the sine or cosine for an angle β {\displaystyle \beta } , the y or x coordinate of a point on the unit circle corresponding to the desired angle must be found. Using CORDIC, one would start with the vector v 0 {\displaystyle v_{0}} : In the first iteration, this vector is rotated 45° counterclockwise to get the vector v 1 {\displaystyle v_{1}} . Successive iterations rotate the vector in one or the other direction by size-decreasing steps, until the desired angle has been achieved. Each step angle is γ i = arctan ⁡ ( 2 − i ) {\displaystyle \gamma _{i}=\arctan {(2^{-i})}} for i = 0 , 1 , 2 , … {\displaystyle i=0,1,2,\dots } . More formally, every iteration calculates a rotation, which is performed by multiplying the vector v i {\displaystyle v_{i}} with the rotation matrix R i {\displaystyle R_{i}} : The rotation matrix is given by Using the trigonometric identity : the cosine factor can be taken out to give: The expression for the rotated vector v i + 1 = R i v i {\displaystyle v_{i+1}=R_{i}v_{i}} then becomes: where x i {\displaystyle x_{i}} and y i {\displaystyle y_{i}} are the components of v i {\displaystyle v_{i}} . Setting the angle γ i {\displaystyle \gamma _{i}} for each iteration such that tan ⁡ ( γ i ) = ± 2 − i {\displaystyle \tan(\gamma _{i})=\pm 2^{-i}} still yields a series that converges to every possible output value. The multiplication with the tangent can therefore be replaced by a division by a power of two, which is efficiently done in digital computer hardware using a bit shift . The expression then becomes: and σ i {\displaystyle \sigma _{i}} is used to determine the direction of the rotation: if the angle γ i {\displaystyle \gamma _{i}} is positive, then σ i {\displaystyle \sigma _{i}} is +1, otherwise it is −1. The following trigonometric identity can be used to replace the cosine: giving this multiplier for each iteration: The K i {\displaystyle K_{i}} factors can then be taken out of the iterative process and applied all at once afterwards with a scaling factor K ( n ) {\displaystyle K(n)} : which is calculated in advance and stored in a table or as a single constant, if the number of iterations is fixed. This correction could also be made in advance, by scaling v 0 {\displaystyle v_{0}} and hence saving a multiplication. Additionally, it can be noted that [ 43 ] to allow further reduction of the algorithm's complexity. Some applications may avoid correcting for K {\displaystyle K} altogether, resulting in a processing gain A {\displaystyle A} : [ 57 ] After a sufficient number of iterations, the vector's angle will be close to the wanted angle β {\displaystyle \beta } . For most ordinary purposes, 40 iterations ( n = 40) are sufficient to obtain the correct result to the 10th decimal place. The only task left is to determine whether the rotation should be clockwise or counterclockwise at each iteration (choosing the value of σ {\displaystyle \sigma } ). 
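To summarize the procedure just described, here is a minimal floating-point sketch of rotation-mode CORDIC in Python. It is an illustration only: a real implementation would use fixed-point arithmetic, bit shifts and a small lookup table, and the function name and use of Python floats are assumptions of this example. The choice of σ at each step, described next in the text, is implemented here by tracking how much angle remains to be rotated.

```python
import math

def cordic_sin_cos(beta, n=40):
    """Rotation-mode CORDIC sketch: returns (cos(beta), sin(beta)) for |beta| <= pi/2."""
    angles = [math.atan(2.0 ** -i) for i in range(n)]          # gamma_i = arctan(2^-i)
    K = math.prod(1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i)) for i in range(n))
    x, y = 1.0, 0.0            # start with v0 = (1, 0) on the unit circle
    z = beta                   # angle still to be rotated
    for i in range(n):
        sigma = 1.0 if z >= 0.0 else -1.0      # rotate toward the remaining angle
        factor = sigma * 2.0 ** -i             # tan(gamma_i); a bit shift in hardware
        x, y = x - y * factor, y + x * factor  # rotation step without the cosine factor
        z -= sigma * angles[i]                 # account for the step just taken
    return x * K, y * K        # apply the combined scale factor K(n) once at the end

# cordic_sin_cos(0.5)  ->  approximately (0.8775825619, 0.4794255386)
```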
This is done by keeping track of how much the angle was rotated at each iteration and subtracting that from the wanted angle; then in order to get closer to the wanted angle β {\displaystyle \beta } , if β n + 1 {\displaystyle \beta _{n+1}} is positive, the rotation is clockwise, otherwise it is negative and the rotation is counterclockwise: The values of γ n {\displaystyle \gamma _{n}} must also be precomputed and stored. For small angles it can be approximated with arctan ⁡ ( γ n ) ≈ γ n {\displaystyle \arctan(\gamma _{n})\approx \gamma _{n}} to reduce the table size. As can be seen in the illustration above, the sine of the angle β {\displaystyle \beta } is the y coordinate of the final vector v n , {\displaystyle v_{n},} while the x coordinate is the cosine value. The rotation-mode algorithm described above can rotate any vector (not only a unit vector aligned along the x axis) by an angle between −90° and +90°. Decisions on the direction of the rotation depend on β i {\displaystyle \beta _{i}} being positive or negative. The vectoring-mode of operation requires a slight modification of the algorithm. It starts with a vector whose x coordinate is positive whereas the y coordinate is arbitrary. Successive rotations have the goal of rotating the vector to the x axis (and therefore reducing the y coordinate to zero). At each step, the value of y determines the direction of the rotation. The final value of β i {\displaystyle \beta _{i}} contains the total angle of rotation. The final value of x will be the magnitude of the original vector scaled by K . So, an obvious use of the vectoring mode is the transformation from rectangular to polar coordinates. In Java the Math class has a scalb(double x,int scale) method to perform such a shift, [ 58 ] C has the ldexp function, [ 59 ] and the x86 class of processors have the fscale floating point operation. [ 60 ] The number of logic gates for the implementation of a CORDIC is roughly comparable to the number required for a multiplier as both require combinations of shifts and additions. The choice for a multiplier-based or CORDIC-based implementation will depend on the context. The multiplication of two complex numbers represented by their real and imaginary components (rectangular coordinates), for example, requires 4 multiplications, but could be realized by a single CORDIC operating on complex numbers represented by their polar coordinates, especially if the magnitude of the numbers is not relevant (multiplying a complex vector with a vector on the unit circle actually amounts to a rotation). CORDICs are often used in circuits for telecommunications such as digital down converters . In two of the publications by Vladimir Baykov, [ 61 ] [ 62 ] it was proposed to use the double iterations method for the implementation of the functions: arcsine, arccosine, natural logarithm, exponential function, as well as for the calculation of the hyperbolic functions. Double iterations method consists in the fact that unlike the classical CORDIC method, where the iteration step value changes every time, i.e. on each iteration, in the double iteration method, the iteration step value is repeated twice and changes only through one iteration. Hence the designation for the degree indicator for double iterations appeared: i = 0 , 0 , 1 , 1 , 2 , 2 … {\displaystyle i=0,0,1,1,2,2\dots } . Whereas with ordinary iterations: i = 0 , 1 , 2 … {\displaystyle i=0,1,2\dots } . 
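Under the same illustrative assumptions (Python floats in place of fixed-point hardware; the function name is invented for this sketch), vectoring mode drives the y coordinate toward zero while accumulating the rotations applied, giving the rectangular-to-polar conversion mentioned above:

```python
import math

def cordic_rect_to_polar(x, y, n=40):
    """Vectoring-mode CORDIC sketch: returns (magnitude, angle) for x > 0."""
    angles = [math.atan(2.0 ** -i) for i in range(n)]
    K = math.prod(1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i)) for i in range(n))
    z = 0.0                                    # accumulated angle of the input vector
    for i in range(n):
        sigma = -1.0 if y >= 0.0 else 1.0      # rotate so that y is driven toward 0
        factor = sigma * 2.0 ** -i
        x, y = x - y * factor, y + x * factor
        z -= sigma * angles[i]
    return x * K, z                            # x holds the magnitude scaled by 1/K

# cordic_rect_to_polar(3.0, 4.0)  ->  approximately (5.0, 0.9272952180)
```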
The double iteration method guarantees the convergence of the method throughout the valid range of argument changes. The generalization of the CORDIC convergence problems for the arbitrary positional number system with radix R {\displaystyle R} showed [ 63 ] that for the functions sine, cosine, arctangent, it is enough to perform R − 1 {\displaystyle R-1} iterations for each value of i (i = 0 or 1 to n, where n is the number of digits), i.e. for each digit of the result. For the natural logarithm, exponential, hyperbolic sine, cosine and arctangent, R {\displaystyle R} iterations should be performed for each value i {\displaystyle i} . For the functions arcsine and arccosine, two R − 1 {\displaystyle R-1} iterations should be performed for each number digit, i.e. for each value of i {\displaystyle i} . [ 63 ] For inverse hyperbolic sine and arcosine functions, the number of iterations will be 2 R {\displaystyle 2R} for each i {\displaystyle i} , that is, for each result digit. CORDIC is part of the class of "shift-and-add" algorithms , as are the logarithm and exponential algorithms derived from Henry Briggs' work. Another shift-and-add algorithm which can be used for computing many elementary functions is the BKM algorithm , which is a generalization of the logarithm and exponential algorithms to the complex plane. For instance, BKM can be used to compute the sine and cosine of a real angle x {\displaystyle x} (in radians) by computing the exponential of 0 + i x {\displaystyle 0+ix} , which is cis ⁡ ( x ) = cos ⁡ ( x ) + i sin ⁡ ( x ) {\displaystyle \operatorname {cis} (x)=\cos(x)+i\sin(x)} . The BKM algorithm is slightly more complex than CORDIC, but has the advantage that it does not need a scaling factor ( K ).
https://en.wikipedia.org/wiki/Redundant_CORDIC
A redundant array of independent memory ( RAIM ) is a design feature found in certain computers ' main random access memory . [ 1 ] RAIM utilizes additional memory modules and striping algorithms to protect against the failure of any particular module and keep the memory system operating continuously. RAIM is similar in concept to a redundant array of independent disks (RAID), which protects against the failure of a disk drive , but in the case of memory it supports several DRAM device chipkills and entire memory channel failures. RAIM is much more robust than parity checking and ECC memory technologies which cannot protect against many varieties of memory failures. On July 22, 2010, IBM introduced the first high end computer server featuring RAIM, the zEnterprise 196 . Each z196 machine contains up to 3 TB (usable) of RAIM-protected main memory. In 2011 the business class model z114 was introduced also supporting RAIM. The formal announcement letter offered some additional information regarding the implementation: ... IBM's most robust error correction to date can be found in the memory subsystem. A new redundant array of independent memory (RAIM) technology is being introduced to provide protection at the dynamic random access memory (DRAM), dual inline memory module (DIMM), and memory channel level. Three full DRAM failures per rank can be corrected. DIMM level failures, including components such as the controller application specific integrated circuit (ASIC), the power regulators, the clocks, and the board, can be corrected. Memory channel failures such as signal lines, control lines, and drivers/receivers on the MCM can be corrected. Upstream and downstream data signals can be spared using two spare wires on both the upstream and downstream paths. One of these signals can be used to spare a clock signal line (one upstream and one downstream). Together these improvements are designed to deliver System z's most resilient memory subsystem to date. [ 2 ]
https://en.wikipedia.org/wiki/Redundant_array_of_independent_memory
In mathematical logic , a redundant proof is a proof that has a subset that is a shorter proof of the same result. In other words, a proof is redundant if it has more proof steps than are actually necessary to prove the result. Formally, a proof ψ {\displaystyle \psi } of κ {\displaystyle \kappa } is considered redundant if there exists another proof ψ ′ {\displaystyle \psi ^{\prime }} of κ ′ {\displaystyle \kappa ^{\prime }} such that κ ′ ⊆ κ {\displaystyle \kappa ^{\prime }\subseteq \kappa } (i.e. κ ′ subsumes κ {\displaystyle \kappa ^{\prime }\;{\text{subsumes}}\;\kappa } ) and | ψ ′ | < | ψ | {\displaystyle |\psi ^{\prime }|<|\psi |} where | φ | {\displaystyle |\varphi |} is the number of nodes in φ {\displaystyle \varphi } . [ 1 ] A proof containing a subproof of the shapes (here omitted pivots [ further explanation needed ] indicate that the resolvents must be uniquely defined) is locally redundant. Indeed, both of these subproofs can be equivalently replaced by the shorter subproof η ⊙ ( η 1 ⊙ η 2 ) {\displaystyle \eta \odot (\eta _{1}\odot \eta _{2})} . In the case of local redundancy, the pairs of redundant inferences having the same pivot occur close to each other in the proof. However, redundant inferences can also occur far apart in the proof. The following definition generalizes local redundancy by considering inferences with the same pivot that occur within different contexts. We write ψ [ η ] {\displaystyle \psi \left[\eta \right]} to denote a proof-context ψ [ − ] {\displaystyle \psi \left[-\right]} with a single placeholder replaced by the subproof η {\displaystyle \eta } . A proof is potentially (globally) redundant. Furthermore, it is (globally) redundant if it can be rewritten to one of the following shorter proofs: The proof is locally redundant as it is an instance of the first pattern in the definition ( ( η ⊙ p η 1 ) ⊙ η 3 ) ⊙ ( ( η ⊙ p η 2 ) ⊙ η 3 ) . {\displaystyle ((\eta \odot _{p}\eta _{1})\odot \eta _{3})\odot ((\eta \odot _{p}\eta _{2})\odot \eta _{3}).} But it is not globally redundant because the replacement terms according to the definition contain ψ 1 [ η 1 ] ⊙ ψ 2 [ η 2 ] {\displaystyle \psi _{1}[\eta _{1}]\odot \psi _{2}[\eta _{2}]} in all the cases and ψ 1 [ η 1 ] ⊙ ψ 2 [ η 2 ] = ( η 1 ⊙ η 3 ) ⊙ ( η 2 ⊙ η 3 ) {\displaystyle \psi _{1}[\eta _{1}]\odot \psi _{2}[\eta _{2}]=(\eta _{1}\odot \eta _{3})\odot (\eta _{2}\odot \eta _{3})} does not correspond to a proof. In particular, neither η 1 {\displaystyle \eta _{1}} nor η 2 {\displaystyle \eta _{2}} can be resolved with η 3 {\displaystyle \eta _{3}} , as they do not contain the literal q {\displaystyle q} . The second pattern of potentially globally redundant proofs appearing in global redundancy definition is related to the well-known [ further explanation needed ] notion of regularity [ further explanation needed ] . Informally, a proof is irregular if there is a path from a node to the root of the proof such that a literal is used more than once as a pivot in this path.
https://en.wikipedia.org/wiki/Redundant_proof
Redux is the generic name of a family of phenol–formaldehyde / polyvinyl–formal adhesives developed by Aero Research Limited (ARL) at Duxford , UK, in the 1940s, subsequently produced by Ciba (ARL) . The brand name is now also used for a range of epoxy and bismaleimide adhesives manufactured by Hexcel . The name is a contraction of RE search at DUX ford . Devised at ARL by Dr. Norman de Bruyne and George Newell in 1941 for use in the aircraft industry, the adhesive is used for the bonding of metal-to-metal and metal-to-wood structures. The adhesive system comprises a liquid phenolic resin and a PVF (PolyVinylFormal) thermoplastic powder. The first formulation available was Redux Liquid E/Formvar , comprising a phenolic liquid ( Redux Liquid E ) and a PVF powder ( Formvar ), and after its initial non-aviation related application of bonding clutch plates on Churchill and Cromwell tanks , it was used by de Havilland from 1943 to the early 1960s, on, among other aircraft, the Hornet , the Comet and the derived Nimrod , and the Dove , Heron and Trident . It was also used by Vickers on the Viking and by Chance Vought on the F7U Cutlass . Typically, Redux would be used to affix stiffening stringers and doublers to wing and fuselage panels, the resulting panel being both stronger and lighter than a riveted structure. In the case of the Hornet it was used to join the aluminium lower-wing skin to the wooden upper wing structure, and in the fabrication of the aluminium/wood main wing spar , both forms of composite construction made possible by the advent of Redux. After initially supplying de Havilland only, ARL subsequently produced a refined form of Redux Liquid E/Formvar using a new liquid component known as Redux Liquid K6 , and a finer-grade (smaller particle -size) PVF powder, and this was later made generally available to the wider aircraft industry as Redux Liquid 775/Powder 775 , so-named because it was sold for aircraft use to specification DTD 775 * . Available for general non- aerospace use it was called Redux Liquid K6/Powder C . Redux Liquid 775/Powder 775 was joined in 1954 by the subsequent Redux Film 775 system, used from 1962 by de Havilland (later Hawker Siddeley and subsequently British Aerospace ) on the DH.125 and DH.146 . Other users included Bristol (on the Britannia ), SAAB (on the Lansen & Draken ), Fokker (on the F.27 ), Sud Aviation (on the Alouette II/III ), Breguet and Fairchild , the film-form having the advantage of greater gap-filling ability with no loss of strength over Redux Liquid 775/Powder 775 , allowing for wider tolerances in component-fit, as well as easier handling and use and controlled ratios of the liquid/powder components. Other Redux adhesives available included "Redux 64", a solution of the phenolic liquid and PVF powder, used worldwide for bonding linings to brake shoes, pads and clutches. The Redux range was subsequently expanded to include the current range of adhesives, both in single and two part paste systems and film forms, for both aerospace and industrial uses. * DTD = Directorate of Technical Development To use Redux in its liquid/powder form, a thin film of the phenolic liquid is applied to both mating surfaces and then dusted with or dipped in the PVF powder to give an approximate ratio by weight of 1 part liquid to 2 parts powder. The coated joints are then allowed to stand for between 30 minutes and 72 hours, then the components are brought together under elevated pressure and temperature. 
The curing process is by condensation and a typical figure for Redux Liquid 775/Powder 775 is 30 minutes at 293 °F (145 °C) under a pressure of 100 psi (690 kPa). This is not critical and variations in curing-time and/or temperature may be used to increase shear and creep strength at temperatures above 140 °F (60 °C). Extending the curing cycle gives benefits in fatigue strength at some cost in the room-temperature peel strength, the practical limit for aluminium alloys being approximately 338 °F (170 °C) for one hour, due to the possibility of affecting the alloy's mechanical properties. Strength of bonds to materials other than aluminium: a table gives the tensile shear of 0.5 in (13 mm) lap joints at room temperature for two substrates, 1 = HK31A-H24 and 2 = ICI Titanium 130; the numerical values are not preserved here.
https://en.wikipedia.org/wiki/Redux_(adhesive)
In computer science , a red–black tree is a self-balancing binary search tree data structure noted for fast storage and retrieval of ordered information. The nodes in a red-black tree hold an extra "color" bit, often drawn as red and black, which help ensure that the tree is always approximately balanced. [ 1 ] When the tree is modified, the new tree is rearranged and "repainted" to restore the coloring properties that constrain how unbalanced the tree can become in the worst case. The properties are designed such that this rearranging and recoloring can be performed efficiently. The (re-)balancing is not perfect, but guarantees searching in O ( log ⁡ n ) {\displaystyle O(\log n)} time, where n {\displaystyle n} is the number of entries in the tree. The insert and delete operations, along with tree rearrangement and recoloring, also execute in O ( log ⁡ n ) {\displaystyle O(\log n)} time. [ 2 ] [ 3 ] Tracking the color of each node requires only one bit of information per node because there are only two colors (due to memory alignment present in some programming languages, the real memory consumption may differ). The tree does not contain any other data specific to it being a red–black tree, so its memory footprint is almost identical to that of a classic (uncolored) binary search tree . In some cases, the added bit of information can be stored at no added memory cost. In 1972, Rudolf Bayer [ 4 ] invented a data structure that was a special order-4 case of a B-tree . These trees maintained all paths from root to leaf with the same number of nodes, creating perfectly balanced trees. However, they were not binary search trees. Bayer called them a "symmetric binary B-tree" in his paper and later they became popular as 2–3–4 trees or even 2–3 trees. [ 5 ] In a 1978 paper, "A Dichromatic Framework for Balanced Trees", [ 6 ] Leonidas J. Guibas and Robert Sedgewick derived the red–black tree from the symmetric binary B-tree. [ 7 ] The color "red" was chosen because it was the best-looking color produced by the color laser printer available to the authors while working at Xerox PARC . [ 8 ] Another response from Guibas states that it was because of the red and black pens available to them to draw the trees. [ 9 ] In 1993, Arne Andersson introduced the idea of a right leaning tree to simplify insert and delete operations. [ 10 ] In 1999, Chris Okasaki showed how to make the insert operation purely functional. Its balance function needed to take care of only 4 unbalanced cases and one default balanced case. [ 11 ] The original algorithm used 8 unbalanced cases, but Cormen et al. (2001) reduced that to 6 unbalanced cases. [ 1 ] Sedgewick showed that the insert operation can be implemented in just 46 lines of Java . [ 12 ] [ 13 ] In 2008, Sedgewick proposed the left-leaning red–black tree , leveraging Andersson’s idea that simplified the insert and delete operations. Sedgewick originally allowed nodes whose two children are red, making his trees more like 2–3–4 trees, but later this restriction was added, making new trees more like 2–3 trees. Sedgewick implemented the insert algorithm in just 33 lines, significantly shortening his original 46 lines of code. [ 14 ] [ 15 ] The black depth of a node is defined as the number of black nodes from the root to that node (i.e. the number of black ancestors). 
The black height of a red–black tree is the number of black nodes in any path from the root to the leaves, which, by requirement 4, is constant (alternatively, it could be defined as the black depth of any leaf node). [ 16 ] : 154–165 The black height of a node is the black height of the subtree rooted by it. In this article, the black height of a null node shall be set to 0, because its subtree is empty as suggested by the example figure, and its tree height is also 0. In addition to the requirements imposed on a binary search tree, the following must be satisfied by a red–black tree: [ 17 ] 1. Every node is either red or black. 2. All null nodes are considered black. 3. A red node does not have a red child. 4. Every path from a given node to any of its descendant null nodes goes through the same number of black nodes. Some authors, e.g. Cormen et al., [ 17 ] add "the root is black" as a fifth requirement; but not Mehlhorn & Sanders [ 16 ] or Sedgewick & Wayne. [ 15 ] : 432–447 Since the root can always be changed from red to black, this rule has little effect on analysis. This article also omits it, because it slightly disturbs the recursive algorithms and proofs. As an example, every perfect binary tree that consists only of black nodes is a red–black tree. The read-only operations, such as search or tree traversal, do not affect any of the requirements. In contrast, the modifying operations insert and delete easily maintain requirements 1 and 2, but with respect to the other requirements some extra effort must be made to avoid introducing a violation of requirement 3, called a red-violation , or of requirement 4, called a black-violation . The requirements enforce a critical property of red–black trees: the path from the root to the farthest leaf is no more than twice as long as the path from the root to the nearest leaf . The result is that the tree is height-balanced . Since operations such as inserting, deleting, and finding values require worst-case time proportional to the height h {\displaystyle h} of the tree, this upper bound on the height allows red–black trees to be efficient in the worst case, namely logarithmic in the number n {\displaystyle n} of entries, i.e. h ∈ O ( log ⁡ n ) {\displaystyle h\in O(\log n)} (a property which is shared by all self-balancing trees, e.g., the AVL tree or B-tree , but not by ordinary binary search trees ). For a mathematical proof see section Proof of bounds . Red–black trees, like all binary search trees , allow quite efficient sequential access (e.g. in-order traversal , that is: in the order Left–Root–Right) of their elements. But they also support asymptotically optimal direct access via a traversal from root to leaf, resulting in O ( log ⁡ n ) {\displaystyle O(\log n)} search time. Red–black trees are similar in structure to 2–3–4 trees , which are B-trees of order 4. [ 18 ] In 2–3–4 trees, each node can contain between 1 and 3 values and have between 2 and 4 children. These 2–3–4 nodes correspond to black node – red children groups in red–black trees, as shown in figure 1. It is not a 1-to-1 correspondence , because 3-nodes have two equivalent representations: the red child may lie either to the left or to the right. The left-leaning red–black tree variant makes this relationship exactly 1-to-1, by allowing only the left-child representation. Since every 2–3–4 node has a corresponding black node, invariant 4 of red–black trees is equivalent to saying that the leaves of a 2–3–4 tree all lie at the same level. Despite structural similarities, operations on red–black trees are more economical than on B-trees. B-trees require management of vectors of variable length, whereas red–black trees are simply binary trees.
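To ground requirements 3 and 4, here is a small Python sketch, with a node layout and helper name of this example's own choosing, that computes black heights and reports red-violations and black-violations:

```python
RED, BLACK = "red", "black"

class Node:
    def __init__(self, key, color, left=None, right=None):
        self.key, self.color, self.left, self.right = key, color, left, right

def black_height(node):
    """Return the black height of the subtree rooted at node,
    raising an error on a red-violation or a black-violation."""
    if node is None:                       # null nodes count as black (requirement 2)
        return 0
    if node.color == RED:
        for child in (node.left, node.right):
            if child is not None and child.color == RED:
                raise ValueError("red-violation: a red node has a red child")   # requirement 3
    left = black_height(node.left)
    right = black_height(node.right)
    if left != right:                      # requirement 4: same black count on every path
        raise ValueError("black-violation: unequal black heights")
    return left + (1 if node.color == BLACK else 0)
```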
[ 19 ] Red–black trees offer worst-case guarantees for insertion time, deletion time, and search time. Not only does this make them valuable in time-sensitive applications such as real-time applications , but it makes them valuable building blocks in other data structures that provide worst-case guarantees. For example, many data structures used in computational geometry are based on red–black trees, and the Completely Fair Scheduler and epoll system call of the Linux kernel use red–black trees. [ 20 ] [ 21 ] The AVL tree is another structure supporting O ( log ⁡ n ) {\displaystyle O(\log n)} search, insertion, and removal. AVL trees can be colored red–black, and thus are a subset of red-black trees. The worst-case height of AVL is 0.720 times the worst-case height of red-black trees, so AVL trees are more rigidly balanced. The performance measurements of Ben Pfaff with realistic test cases in 79 runs find AVL to RB ratios between 0.677 and 1.077, median at 0.947, and geometric mean 0.910. [ 22 ] The performance of WAVL trees lie in between AVL trees and red-black trees. [ citation needed ] Red–black trees are also particularly valuable in functional programming , where they are one of the most common persistent data structures , used to construct associative arrays and sets that can retain previous versions after mutations. The persistent version of red–black trees requires O ( log ⁡ n ) {\displaystyle O(\log n)} space for each insertion or deletion, in addition to time. For every 2–3–4 tree , there are corresponding red–black trees with data elements in the same order. The insertion and deletion operations on 2–3–4 trees are also equivalent to color-flipping and rotations in red–black trees. This makes 2–3–4 trees an important tool for understanding the logic behind red–black trees, and this is why many introductory algorithm texts introduce 2–3–4 trees just before red–black trees, even though 2–3–4 trees are not often used in practice. In 2008, Sedgewick introduced a simpler version of the red–black tree called the left-leaning red–black tree [ 23 ] by eliminating a previously unspecified degree of freedom in the implementation. The LLRB maintains an additional invariant that all red links must lean left except during inserts and deletes. Red–black trees can be made isometric to either 2–3 trees , [ 24 ] or 2–3–4 trees, [ 23 ] for any sequence of operations. The 2–3–4 tree isometry was described in 1978 by Sedgewick. [ 6 ] With 2–3–4 trees, the isometry is resolved by a "color flip," corresponding to a split, in which the red color of two children nodes leaves the children and moves to the parent node. The original description of the tango tree , a type of tree optimised for fast searches, specifically uses red–black trees as part of its data structure. [ 25 ] As of Java 8, the HashMap has been modified such that instead of using a LinkedList to store different elements with colliding hashcodes , a red–black tree is used. This results in the improvement of time complexity of searching such an element from O ( m ) {\displaystyle O(m)} to O ( log ⁡ m ) {\displaystyle O(\log m)} where m {\displaystyle m} is the number of elements with colliding hashes. [ 26 ] The read-only operations, such as search or tree traversal, on a red–black tree require no modification from those used for binary search trees , because every red–black tree is a special case of a simple binary search tree. 
However, the immediate result of an insertion or removal may violate the properties of a red–black tree, the restoration of which is called rebalancing so that red–black trees become self-balancing. Rebalancing (i.e. color changes) has a worst-case time complexity of O ( log ⁡ n ) {\displaystyle O(\log n)} and average of O ( 1 ) {\displaystyle O(1)} , [ 27 ] : 310 [ 16 ] : 158 though these are very quick in practice. Additionally, rebalancing takes no more than three tree rotations [ 28 ] (two for insertion). This is an example implementation of insert and remove in C . Below are the data structures and the rotate_subtree helper function used in the insert and remove examples. The proposal breaks down both insertion and removal (not mentioning some very simple cases) into six constellations of nodes, edges, and colors, which are called cases. The proposal contains, for both insertion and removal, exactly one case that advances one black level closer to the root and loops, the other five cases rebalance the tree of their own. The more complicated cases are pictured in a diagram. Insertion begins by placing the new (non-NULL) node, say N , at the position in the binary search tree of a NULL node whose in-order predecessor’s key compares less than the new node’s key, which in turn compares less than the key of its in-order successor. (Frequently, this positioning is the result of a search within the tree immediately preceding the insert operation and consists of a node P together with a direction dir with P->child[dir] == NULL .) The newly inserted node is temporarily colored red so that all paths contain the same number of black nodes as before. But if its parent, say P , is also red then this action introduces a red-violation . The rebalancing loop of the insert operation has the following invariants : The current node’s parent P is black, so requirement 3 holds. Requirement 4 holds also according to the loop invariant . If both the parent P and the uncle U are red, then both of them can be repainted black and the grandparent G becomes red for maintaining requirement 4 . Since any path through the parent or uncle must pass through the grandparent, the number of black nodes on these paths has not changed. However, the grandparent G may now violate requirement 3, if it has a red parent. After relabeling G to N the loop invariant is fulfilled so that the rebalancing can be iterated on one black level (= 2 tree levels) higher. Insert case 2 has been executed for h − 1 2 {\displaystyle {\tfrac {h-1}{2}}} times and the total height of the tree has increased by 1, now being h . The current node N is the (red) root of the tree, and all RB-properties are satisfied. The parent P is red and the root. Because N is also red, requirement 3 is violated. But after switching P ’s color the tree is in RB-shape. The black height of the tree increases by 1. The parent P is red but the uncle U is black. The ultimate goal is to rotate the parent node P to the grandparent position, but this will not work if N is an "inner" grandchild of G (i.e., if N is the left child of the right child of G or the right child of the left child of G ). A dir -rotation at P switches the roles of the current node N and its parent P . The rotation adds paths through N (those in the subtree labeled 2 , see diagram) and removes paths through P (those in the subtree labeled 4 ). But both P and N are red, so requirement 4 is preserved. Requirement 3 is restored in case 6. 
The current node N is now certain to be an "outer" grandchild of G (left of left child or right of right child). Now (1-dir) -rotate at G , putting P in place of G and making P the parent of N and G . G is black and its former child P is red, since requirement 3 was violated. After switching the colors of P and G the resulting tree satisfies requirement 3. Requirement 4 also remains satisfied, since all paths that went through the black G now go through the black P . Because the algorithm transforms the input without using an auxiliary data structure and using only a small amount of extra storage space for auxiliary variables it is in-place . The complex case is when N is not the root, colored black and has no proper child (⇔ only NULL children). In the first iteration, N is replaced by NULL. The rebalancing loop of the delete operation has the following invariant : The current node N is the new root. One black node has been removed from every path, so the RB-properties are preserved. The black height of the tree decreases by 1. P , S , and S ’s children are black. After painting S red all paths passing through S , which are precisely those paths not passing through N , have one less black node. Now all paths in the subtree rooted by P have the same number of black nodes, but one fewer than the paths that do not pass through P , so requirement 4 may still be violated. After relabeling P to N the loop invariant is fulfilled so that the rebalancing can be iterated on one black level (= 1 tree level) higher. The sibling S is red, so P and the nephews C and D have to be black. A dir -rotation at P turns S into N ’s grandparent. Then after reversing the colors of P and S , the path through N is still short one black node. But N now has a red parent P and after the reassignment a black sibling S , so the transformations in cases 4, 5, or 6 are able to restore the RB-shape. The sibling S and S ’s children are black, but P is red. Exchanging the colors of S and P does not affect the number of black nodes on paths going through S , but it does add one to the number of black nodes on paths going through N , making up for the deleted black node on those paths. The sibling S is black, S ’s close child C is red, and S ’s distant child D is black. After a (1-dir) -rotation at S the nephew C becomes S ’s parent and N ’s new sibling. The colors of S and C are exchanged. All paths still have the same number of black nodes, but now N has a black sibling whose distant child is red, so the constellation is fit for case D6. Neither N nor its parent P are affected by this transformation, and P may be red or black ( in the diagram). The sibling S is black, S ’s distant child D is red. After a dir -rotation at P the sibling S becomes the parent of P and S ’s distant child D . The colors of P and S are exchanged, and D is made black. The whole subtree still has the same color at its root S , namely either red or black ( in the diagram), which refers to the same color both before and after the transformation. This way requirement 3 is preserved. The paths in the subtree not passing through N (i.o.w. passing through D and node 3 in the diagram) pass through the same number of black nodes as before, but N now has one additional black ancestor: either P has become black, or it was black and S was added as a black grandparent. Thus, the paths passing through N pass through one additional black node, so that requirement 4 is restored and the total tree is in RB-shape. 
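As a compact summary of the insertion rebalancing just described (the removal loop is analogous but has more cases), here is a Python sketch of the insert loop. The helpers rotate_left and rotate_right are assumed to exist and to maintain parent pointers, and the grouping of cases is a simplification of the article's numbered cases rather than a literal transcription.

```python
RED, BLACK = "red", "black"

def insert_fixup(tree, n):
    """Repair a red-violation after the red node n has been inserted."""
    while n.parent is not None and n.parent.color == RED:
        p = n.parent
        g = p.parent                      # exists and is black, since p is red
        p_is_left = (p is g.left)
        u = g.right if p_is_left else g.left          # the uncle of n
        if u is not None and u.color == RED:
            # Parent and uncle both red: recolor and continue one black level higher.
            p.color = BLACK
            u.color = BLACK
            g.color = RED
            n = g
        else:
            # Uncle is black (or null). First turn an "inner" grandchild into an
            # "outer" one, then rotate the grandparent and recolor.
            if p_is_left and n is p.right:
                rotate_left(tree, p)
                n, p = p, n
            elif not p_is_left and n is p.left:
                rotate_right(tree, p)
                n, p = p, n
            p.color = BLACK
            g.color = RED
            if p_is_left:
                rotate_right(tree, g)
            else:
                rotate_left(tree, g)
    tree.root.color = BLACK               # optional convention: keep the root black
```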
Because the algorithm transforms the input without using an auxiliary data structure and using only a small amount of extra storage space for auxiliary variables it is in-place . For h ∈ N {\displaystyle h\in \mathbb {N} } there is a red–black tree of height h {\displaystyle h} with nodes ( ⌊ ⌋ {\displaystyle \lfloor \,\rfloor } is the floor function ) and there is no red–black tree of this tree height with fewer nodes—therefore it is minimal . Its black height is ⌈ h / 2 ⌉ {\displaystyle \lceil h/2\rceil } (with black root) or for odd h {\displaystyle h} (then with a red root) also ( h − 1 ) / 2 . {\displaystyle (h-1)/2~.} For a red–black tree of a certain height to have minimal number of nodes, it must have exactly one longest path with maximal number of red nodes, to achieve a maximal tree height with a minimal black height. Besides this path all other nodes have to be black. [ 15 ] : 444 Proof sketch If a node is taken off this tree it either loses height or some RB property. The RB tree of height h = 1 {\displaystyle h=1} with red root is minimal. This is in agreement with A minimal RB tree (RB h in figure 2) of height h > 1 {\displaystyle h>1} has a root whose two child subtrees are of different height. The higher child subtree is also a minimal RB tree, RB h –1 , containing also a longest path that defines its height h − 1 {\displaystyle h\!\!-\!\!1} ; it has m h − 1 {\displaystyle m_{h-1}} nodes and the black height ⌊ ( h − 1 ) / 2 ⌋ =: s . {\displaystyle \lfloor (h\!\!-\!\!1)/2\rfloor =:s.} The other subtree is a perfect binary tree of (black) height s {\displaystyle s} having 2 s − 1 = 2 ⌊ ( h − 1 ) / 2 ⌋ − 1 {\displaystyle 2^{s}\!\!-\!\!1=2^{\lfloor (h-1)/2\rfloor }\!\!-\!\!1} black nodes—and no red node. Then the number of nodes is by induction The graph of the function m h {\displaystyle m_{h}} is convex and piecewise linear with breakpoints at ( h = 2 k | m 2 k = 2 ⋅ 2 k − 2 ) {\displaystyle (h=2k\;|\;m_{2k}=2\cdot 2^{k}\!-\!2)} where k ∈ N . {\displaystyle k\in \mathbb {N} .} The function has been tabulated as m h = {\displaystyle m_{h}=} A027383( h –1) for h ≥ 1 {\displaystyle h\geq 1} (sequence A027383 in the OEIS ). The inequality 9 > 8 = 2 3 {\displaystyle 9>8=2^{3}} leads to 3 > 2 3 / 2 {\displaystyle 3>2^{3/2}} , which for odd h {\displaystyle h} leads to So in both, the even and the odd case, h {\displaystyle h} is in the interval with n {\displaystyle n} being the number of nodes. [ 33 ] A red–black tree with n {\displaystyle n} nodes (keys) has tree height h ∈ O ( log ⁡ n ) . {\displaystyle h\in O(\log n).} In addition to the single-element insert, delete and lookup operations, several set operations have been defined on red–black trees: union , intersection and set difference . Then fast bulk operations on insertions or deletions can be implemented based on these set functions. These set operations rely on two helper operations, Split and Join . With the new operations, the implementation of red–black trees can be more efficient and highly-parallelizable. [ 34 ] In order to achieve its time complexities this implementation requires that the root is allowed to be either red or black, and that every node stores its own black height . The join algorithm is as follows: The split algorithm is as follows: The union of two red–black trees t 1 and t 2 representing sets A and B , is a red–black tree t that represents A ∪ B . 
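A minimal Python sketch of this join-based union follows; split and join are assumed helpers with the behaviour described in the surrounding text, so only the recursion structure is shown here:

```python
def union(t1, t2):
    """Join-based union of two red-black trees (split and join are assumed helpers)."""
    if t1 is None:
        return t2
    if t2 is None:
        return t1
    # Split t1 by the root key of t2 into the keys below it and the keys above it.
    left, right = split(t1, t2.key)
    # The two recursive calls are independent of each other and can run in parallel.
    return join(union(left, t2.left), t2.key, union(right, t2.right))
```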
Here, split is presumed to return two trees: one holding the keys less than its input key, one holding the greater keys. (The algorithm is non-destructive , but an in-place destructive version exists also.) The algorithm for intersection or difference is similar, but requires the Join2 helper routine, which is the same as Join but without the middle key. Based on the new functions for union, intersection or difference, either one key or multiple keys can be inserted into or deleted from the red–black tree. Since Split calls Join but does not deal with the balancing criteria of red–black trees directly, such an implementation is usually called the "join-based" implementation . The complexity of each of union, intersection and difference is O ( m log ⁡ ( n m + 1 ) ) {\displaystyle O\left(m\log \left({n \over m}+1\right)\right)} for two red–black trees of sizes m {\displaystyle m} and n ( ≥ m ) {\displaystyle n(\geq m)} . This complexity is optimal in terms of the number of comparisons. More importantly, since the recursive calls to union, intersection or difference are independent of each other, they can be executed in parallel with a parallel depth O ( log ⁡ m log ⁡ n ) {\displaystyle O(\log m\log n)} . [ 34 ] When m = 1 {\displaystyle m=1} , the join-based implementation has the same computational directed acyclic graph (DAG) as single-element insertion and deletion if the root of the larger tree is used to split the smaller tree. Parallel algorithms for constructing red–black trees from sorted lists of items can run in constant time or O ( log ⁡ log ⁡ n ) {\displaystyle O(\log \log n)} time, depending on the computer model, if the number of processors available is asymptotically proportional to the number n {\displaystyle n} of items where n → ∞ {\displaystyle n\to \infty } . Fast search, insertion, and deletion parallel algorithms are also known. [ 35 ] The join-based algorithms for red–black trees are parallel for bulk operations, including union, intersection, construction, filter, map-reduce, and so on. Basic operations like insertion, removal or update can be parallelised by defining operations that process bulks of multiple elements. It is also possible to process bulks with several basic operations, for example bulks may contain elements to insert and also elements to remove from the tree. The algorithms for bulk operations are not just applicable to the red–black tree, but can be adapted to other sorted sequence data structures also, like the 2–3 tree , 2–3–4 tree and (a,b)-tree . In the following, different algorithms for bulk insert will be explained, but the same algorithms can also be applied to removal and update. Bulk insert is an operation that inserts each element of a sequence I {\displaystyle I} into a tree T {\displaystyle T} . This approach can be applied to every sorted sequence data structure that supports efficient join- and split-operations. [ 36 ] The general idea is to split I and T in multiple parts and perform the insertions on these parts in parallel. The constraints on how I is split assure that the trees can be joined again afterwards and that the resulting sequence is sorted. The pseudocode (not reproduced here) shows a simple divide-and-conquer implementation of the join-based algorithm for bulk insert; both recursive calls can be executed in parallel. The join operation used there differs from the version explained in this article: instead, join2 is used, which omits the second parameter k.
Sorting I is not considered in this analysis. This can be improved by using parallel algorithms for splitting and joining. In this case the execution time is ∈ O ( log ⁡ | T | + | I | k log ⁡ | T | ) {\displaystyle \in O\left(\log |T|+{\frac {|I|}{k}}\log |T|\right)} . [ 37 ] Another method of parallelizing bulk operations is to use a pipelining approach. [ 38 ] This can be done by breaking the task of processing a basic operation up into a sequence of subtasks. For multiple basic operations the subtasks can be processed in parallel by assigning each subtask to a separate processor. Sorting I is not considered in this analysis. Also, | I | {\displaystyle |I|} is assumed to be smaller than | T | {\displaystyle |T|} , otherwise it would be more efficient to construct the resulting tree from scratch.
https://en.wikipedia.org/wiki/Red–black_tree
In mathematics , the Reeb sphere theorem , named after Georges Reeb , states that a closed oriented connected manifold that admits a singular foliation having only centers as singularities is homeomorphic to a sphere, and that the foliation has exactly two singularities. A singularity of a foliation F is of Morse type if in a small neighborhood of it all leaves of the foliation are level sets of a Morse function , the singularity being a critical point of the function. The singularity is a center if it is a local extremum of the function; otherwise, the singularity is a saddle . The number of centers c and the number of saddles s {\displaystyle s} , specifically the difference c − s {\displaystyle c-s} , is tightly connected with the manifold topology. We denote by ind ⁡ p = min ( k , n − k ) {\displaystyle \operatorname {ind} p=\min(k,n-k)} the index of a singularity p {\displaystyle p} , where k is the index of the corresponding critical point of a Morse function. In particular, a center has index 0, and the index of a saddle is at least 1. A Morse foliation F on a manifold M is a singular transversely oriented codimension one foliation of class C 2 {\displaystyle C^{2}} with isolated singularities such that: This is the case c > s = 0 {\displaystyle c>s=0} , the case without saddles. Theorem: [ 1 ] Let M n {\displaystyle M^{n}} be a closed oriented connected manifold of dimension n ≥ 2 {\displaystyle n\geq 2} . Assume that M n {\displaystyle M^{n}} admits a C 1 {\displaystyle C^{1}} -transversely oriented codimension one foliation F {\displaystyle F} with a non-empty set of singularities, all of them centers. Then the singular set of F {\displaystyle F} consists of two points and M n {\displaystyle M^{n}} is homeomorphic to the sphere S n {\displaystyle S^{n}} . It is a consequence of the Reeb stability theorem . The more general case is c > s ≥ 0. {\displaystyle c>s\geq 0.} In 1978, Edward Wagneur generalized the Reeb sphere theorem to Morse foliations with saddles. He showed that the number of centers cannot exceed the number of saddles by much; notably, c ≤ s + 2 {\displaystyle c\leq s+2} . So there are exactly two cases when c > s {\displaystyle c>s} : (1) c = s + 2 {\displaystyle c=s+2} , and (2) c = s + 1 {\displaystyle c=s+1} . He obtained a description of the manifold admitting a foliation with singularities that satisfy (1). Theorem: [ 2 ] Let M n {\displaystyle M^{n}} be a compact connected manifold admitting a Morse foliation F {\displaystyle F} with c {\displaystyle c} centers and s {\displaystyle s} saddles. Then c ≤ s + 2 {\displaystyle c\leq s+2} . In case c = s + 2 {\displaystyle c=s+2} , Finally, in 2008, César Camacho and Bruno Scardua considered the case (2), c = s + 1 {\displaystyle c=s+1} . This is possible in a small number of low dimensions. Theorem: [ 3 ] Let M n {\displaystyle M^{n}} be a compact connected manifold and F {\displaystyle F} a Morse foliation on M {\displaystyle M} . If c = s + 1 {\displaystyle c=s+1} , then
https://en.wikipedia.org/wiki/Reeb_sphere_theorem
Reed's law is the assertion of David P. Reed that the utility of large networks , particularly social networks , can scale exponentially with the size of the network. [ 1 ] The reason for this is that the number of possible sub-groups of network participants is 2^N − N − 1, where N is the number of participants. This grows much more rapidly than either the number of participants, N , or the number of possible pair connections, N ( N − 1)/2 (which follows Metcalfe's law ), so that even if the utility of groups available to be joined is very small on a per-group basis, eventually the network effect of potential group membership can dominate the overall economics of the system. Given a set A of N people, it has 2^N possible subsets. This is not difficult to see, since we can form each possible subset by simply choosing, for each element of A , one of two possibilities: whether to include that element or not. However, this count includes the (one) empty set and the N singletons , which are not counted as subgroups. So 2^N − N − 1 subsets remain, which still grows exponentially, like 2^N . The law was set out in David P. Reed's "The Law of the Pack" (Harvard Business Review, February 2001, pp. 23–24). Reed's law is often mentioned when explaining the competitive dynamics of internet platforms. Because the law states that a network becomes more valuable when people can easily form subgroups to collaborate, and that this value increases exponentially with the number of connections, a business platform that reaches a sufficient number of members can generate network effects that dominate the overall economics of the system. [ 2 ] Other analysts of network value functions, including Andrew Odlyzko , have argued that both Reed's law and Metcalfe's law [ 3 ] overstate network value because they fail to account for the restrictive impact of human cognitive limits on network formation. According to this argument, the research around Dunbar's number implies a limit on the number of inbound and outbound connections a human in a group-forming network can manage, so that the actual maximum-value structure is much sparser than the set-of-subsets measured by Reed's law or the complete graph measured by Metcalfe's law.
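To make the comparison concrete, here is a small Python illustration (the function names are this example's own) of the three counts contrasted above: the number of participants, the number of possible pair connections, and the number of possible subgroups. Treating these counts directly as "utility" is, of course, the simplification the critics object to.

```python
def participants(n):        # value proportional to audience size
    return n

def metcalfe_pairs(n):      # number of possible pair connections (Metcalfe's law)
    return n * (n - 1) // 2

def reed_subgroups(n):      # number of possible non-trivial subgroups (Reed's law)
    return 2 ** n - n - 1

for n in (10, 20, 30):
    print(n, participants(n), metcalfe_pairs(n), reed_subgroups(n))
# n = 30: 30 participants, 435 pairs, 1_073_741_793 possible subgroups
```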
https://en.wikipedia.org/wiki/Reed's_law
A reed is part of a weaving loom , and resembles a comb or a frame with many vertical slits. [ 1 ] It is used to separate and space the warp threads, to guide the shuttle 's motion across the loom, and to push the weft threads into place. [ 2 ] [ 3 ] [ 1 ] In most floor looms with, the reed is securely held by the beater . [ 1 ] Floor looms and mechanized looms both use a beater with a reed, whereas Inkle weaving and tablet weaving do not use reeds. Modern reeds are made by placing flattened strips of wire (made of carbon or stainless steel [ 4 ] ) between two half round ribs of wood, and binding the whole together with tarred string. [ 1 ] Historically, reeds were made of reed or split cane . [ 2 ] [ 5 ] The split cane was then bound between ribs of wood in the same manner as wire is now. In 1738, John Kay replaced split cane with flattened iron or brass wire, and the change was quickly adopted. To make a reed, wire is flattened to a uniform thickness by passing it between rollers. The flat wire is then straightened, given rounded edges, and filed smooth. The final step is to cut the wire to the correct length and assemble. The tarred cord that binds the reed together is wrapped around each set of wooden ribs and between the dents to hold the ribs together and at the correct spacing. [ 5 ] The length of the metal wire varies depending on the type of fabric and the type of loom being used. For a machine-powered cotton loom, the metal wires are commonly 3.5 inches (89 mm) long. [ 6 ] For hand-powered floor looms, around 4 inches (100 mm) is common. Both the wires and the slots in the reed are known as dents [ 7 ] (namely, teeth). [ 5 ] The warp threads pass through the dents after going through the heddles and before becoming woven cloth. [ 1 ] The number of dents per inch (or per cm or per 10 cm) indicates the number of gaps in the reed per linear width. The number of warp thread ends by weaving width determines the fineness of the cloth. [ 3 ] One or more warp threads may pass through each dent. The number of warp threads that go through each dent depends on the warp and the desired characteristics of the final fabric, and it is possible that the number of threads in each dent is not constant for a whole warp. [ 6 ] The number of threads per dent might not be constant if the weaver alternates 2 and three threads per dent, in order to get a number of ends per inch that is 2.5 times the number of dents per inch, or if the thickness of the warp threads were to change at that point, and the fabric to have a thicker or thinner section. One thread per dent is most common for coarse work. However for finer work (20 or more ends per inch), two or more threads are put through each dent. [ 8 ] Threads can be doubled in every other space, so that a reed with 10 dents per inch could give 15 ends per inch, or 20 if the threads were simply doubled. Also, threads can be put in every other dent so as to make a cloth with 6 ends per inch from a reed with 12 dents per inch. [ 9 ] Putting more than one thread through each dent reduces friction and the number of reeds that one weaver needs, and is used in weaving mills. [ 8 ] If too many threads are put through one dent there may be reed marks left in the fabric, especially in linen and cotton . [ 9 ] For cotton fabrics, reeds typically have between 6 and 90 dents per inch. [ 5 ] When the reed has a very high number of dents per inch, it may contain two offset rows of wires. 
This minimizes friction between the dents and warp threads and prevents loose fibers from twisting and blocking the shed . [ 5 ] Handweaving looms (including floor and table looms) use interchangeable reeds, where the reeds can vary in width and dents per inch. This allows the same loom to be used for making both very fine and very coarse fabric, as well as weaving threads at dramatically different densities. [ 10 ] The width of the reed sets the maximum width of the warp. [ 4 ] Common reed sizes for the hand-weaver are 6, 8, 10, 12, or 15 dents per inch, although sizes between 5 and 24 are not uncommon. [ 9 ] A reed with a larger number of dents per inch is generally used to weave finer fabric with a larger number of ends per inch . Because it is used to beat the weft into place, the reed regulates the distance between threads or groups of threads. Sleying is the term used for pulling the warp threads through the reed, which happens during the warping process (putting a warp on the loom). Sleying is done by inserting a reed hook through the reed, hooking the warp threads and then pulling them through the dent. The warp threads are taken in the order they come from the heddles , so as to avoid crossing threads. [ 6 ] [ 11 ] If the threads cross, the shed will not open correctly when weaving begins. [ 11 ] In Emilia-Romagna , Italy wooden reeds are still used for the traditional making of garganelli and maccheroni al pèttine ( macaroni on reed ). A small square of egg fresh pasta is cut, rolled on a stick and pressed on a wooden reed. With this culinary technique, the pasta is ridged around the circumference; extruded pasta could only have longitudinal ridges. These ridges help the pasta "hold" the dressings like bolognese sauce better than it would without ridges or with longitudinal ones.
https://en.wikipedia.org/wiki/Reed_(weaving)
A reedbed or reed bed is a natural habitat found in floodplains , waterlogged depressions and estuaries . Reedbeds are part of a succession from young reeds colonising open water or wet ground through a gradation of increasingly dry ground. As reedbeds age, they build up a considerable litter layer that eventually rises above the water level and that ultimately provides opportunities in the form of new areas for larger terrestrial plants such as shrubs and trees to colonise. [ 1 ] Artificial reedbeds are used to remove pollutants from greywater , and are also called constructed wetlands . [ 2 ] Reedbeds vary in the species that they can support, depending upon water levels within the wetland system, climate, seasonal variations, and the nutrient status and salinity of the water. Reed swamps have 20 cm or more of surface water during the summer and often have high invertebrate and bird species use. Reed fens have water levels at or below the surface during the summer and are often more botanically complex. Reeds and similar plants do not generally grow in very acidic water. In these situations, reedbeds are replaced by bogs and vegetation such as poor fen . Although common reeds are characteristic of reedbeds, not all vegetation dominated by this species is characteristic of reedbeds. It also commonly occurs in unmanaged, damp grassland and as an understorey in certain types of damp woodland . Most European reedbeds mainly comprise common reed ( Phragmites australis ) but also include many other tall monocotyledons adapted to growing in wet conditions – other grasses such as reed sweet-grass ( Glyceria maxima ), Canary reed-grass ( Phalaris arundinacea ) and small-reed ( Calamagrostis species ), large sedges (species of Carex , Scirpus , Schoenoplectus , Cladium and related genera ), yellow flag iris ( Iris pseudacorus ), reed-mace ("bulrush" – Typha species), water-plantains ( Alisma species), and flowering rush ( Butomus umbellatus ). Many dicotyledons also occur, such as water mint ( Mentha aquatica ), gipsywort ( Lycopus europaeus ), skull-cap ( Scutellaria species), touch-me-not balsam ( Impatiens noli-tangere ), brooklime ( Veronica beccabunga ) and water forget-me-nots ( Myosotis species). Many animals are adapted to living in and around reedbeds. These include mammals such as Eurasian otter , European beaver , water vole , Eurasian harvest mouse and water shrew , and birds such as great bittern , purple heron , European spoonbill , water rail (and other rails ), purple gallinule , marsh harrier , various warblers ( reed warbler , sedge warbler etc.), bearded reedling and reed bunting . Constructed wetlands are artificial swamps (sometimes called reed fields ) using reed or other marshland plants to form part of small-scale sewage treatment systems. Water trickling through the reedbed is cleaned by microorganisms living on the root system and in the litter. These organisms utilize the sewage for growth nutrients , resulting in a clean effluent . The process is very similar to aerobic conventional sewage treatment, as the same organisms are used, except that conventional treatment systems require artificial aeration. Treatment ponds are small versions of constructed wetlands which uses reedbeds or other marshland plants to form an even smaller water treatment system . Similar to constructed wetlands, water trickling through the reedbed is cleaned by microorganisms living on the root system and in the litter. 
Treatment ponds are used for the water treatment of a single house or a small neighbourhood.
https://en.wikipedia.org/wiki/Reed_bed
The Reed reaction is a chemical reaction that utilizes light to oxidize hydrocarbons to alkyl sulfonyl chlorides . This reaction is employed in modifying polyethylene to give chlorosulfonated polyethylene (CSPE), which is noted for its toughness. [ 1 ] Polyethylene is treated with a mixture of chlorine and sulfur dioxide under UV radiation. Vinylsulfonic acid can also be prepared beginning with the sulfochlorination of chloroethane : dehydrohalogenation of the product gives vinylsulfonyl chloride, which is subsequently hydrolyzed to give vinylsulfonic acid . The reaction occurs via a free-radical chain mechanism. UV light initiates homolysis of chlorine , producing a pair of chlorine atoms (chain initiation). A chlorine atom then attacks the hydrocarbon chain, abstracting hydrogen to form hydrogen chloride and an alkyl free radical. The alkyl radical captures SO 2 , and the resulting sulfonyl radical attacks another chlorine molecule to produce the desired sulfonyl chloride and a new chlorine atom, which continues the reaction chain (chain propagation).
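Schematically, the chain steps described above can be summarized as follows (the original displayed equations are not reproduced in this text, so this is a reconstruction from the description, writing R–H for the hydrocarbon):

Initiation:   Cl2 → 2 Cl•  (under UV light)
Propagation:  Cl• + R–H → HCl + R•
              R• + SO2 → R–SO2•
              R–SO2• + Cl2 → R–SO2Cl + Cl•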
https://en.wikipedia.org/wiki/Reed_reaction
Reed–Muller codes are error-correcting codes that are used in wireless communications applications, particularly in deep-space communication. [ 1 ] Moreover, the proposed 5G standard [ 2 ] relies on the closely related polar codes [ 3 ] for error correction in the control channel. Due to their favorable theoretical and mathematical properties, Reed–Muller codes have also been extensively studied in theoretical computer science . Reed–Muller codes generalize the Reed–Solomon codes and the Walsh–Hadamard code . Reed–Muller codes are linear block codes that are locally testable , locally decodable , and list decodable . These properties make them particularly useful in the design of probabilistically checkable proofs . Traditional Reed–Muller codes are binary codes, which means that messages and codewords are binary strings. When r and m are integers with 0 ≤ r ≤ m , the Reed–Muller code with parameters r and m is denoted as RM( r , m ). When asked to encode a message consisting of k bits, where k = ∑ i = 0 r ( m i ) {\displaystyle \textstyle k=\sum _{i=0}^{r}{\binom {m}{i}}} holds, the RM( r , m ) code produces a codeword consisting of 2 m bits. Reed–Muller codes are named after David E. Muller , who discovered the codes in 1954, [ 4 ] and Irving S. Reed , who proposed the first efficient decoding algorithm. [ 5 ] Reed–Muller codes can be described in several different (but ultimately equivalent) ways. The description that is based on low-degree polynomials is quite elegant and particularly suited for their application as locally testable codes and locally decodable codes . [ 6 ] A block code can have one or more encoding functions C : { 0 , 1 } k → { 0 , 1 } n {\textstyle C:\{0,1\}^{k}\to \{0,1\}^{n}} that map messages x ∈ { 0 , 1 } k {\textstyle x\in \{0,1\}^{k}} to codewords C ( x ) ∈ { 0 , 1 } n {\textstyle C(x)\in \{0,1\}^{n}} . The Reed–Muller code RM( r , m ) has message length k = ∑ i = 0 r ( m i ) {\displaystyle \textstyle k=\sum _{i=0}^{r}{\binom {m}{i}}} and block length n = 2 m {\displaystyle \textstyle n=2^{m}} . One way to define an encoding for this code is based on the evaluation of multilinear polynomials with m variables and total degree at most r . Every multilinear polynomial over the finite field with two elements can be written as follows: p c ( Z 1 , … , Z m ) = ∑ S ⊆ { 1 , … , m } | S | ≤ r c S ⋅ ∏ i ∈ S Z i . {\displaystyle p_{c}(Z_{1},\dots ,Z_{m})=\sum _{\underset {|S|\leq r}{S\subseteq \{1,\dots ,m\}}}c_{S}\cdot \prod _{i\in S}Z_{i}\,.} The Z 1 , … , Z m {\textstyle Z_{1},\dots ,Z_{m}} are the variables of the polynomial, and the values c S ∈ { 0 , 1 } {\textstyle c_{S}\in \{0,1\}} are the coefficients of the polynomial. Note that there are exactly k = ∑ i = 0 r ( m i ) {\textstyle k=\sum _{i=0}^{r}{\binom {m}{i}}} coefficients. With this in mind, an input message consists of k {\textstyle k} values x ∈ { 0 , 1 } k {\textstyle x\in \{0,1\}^{k}} which are used as these coefficients. In this way, each message x {\textstyle x} gives rise to a unique polynomial p x {\textstyle p_{x}} in m variables. To construct the codeword C ( x ) {\textstyle C(x)} , the encoder evaluates the polynomial p x {\textstyle p_{x}} at all points Z = ( Z 1 , … , Z m ) ∈ { 0 , 1 } m {\textstyle Z=(Z_{1},\ldots ,Z_{m})\in \{0,1\}^{m}} , where the polynomial is taken with multiplication and addition mod 2 ( p x ( Z ) mod 2 ) ∈ { 0 , 1 } {\textstyle (p_{x}(Z){\bmod {2}})\in \{0,1\}} . That is, the encoding function is defined via C ( x ) = ( p x ( Z ) mod 2 ) Z ∈ { 0 , 1 } m . 
{\displaystyle C(x)=\left(p_{x}(Z){\bmod {2}}\right)_{Z\in \{0,1\}^{m}}\,.} The fact that the codeword C ( x ) {\displaystyle C(x)} suffices to uniquely reconstruct x {\displaystyle x} follows from Lagrange interpolation , which states that the coefficients of a polynomial are uniquely determined when sufficiently many evaluation points are given. Since C ( 0 ) = 0 {\displaystyle C(0)=0} and C ( x + y ) = C ( x ) + C ( y ) mod 2 {\displaystyle C(x+y)=C(x)+C(y){\bmod {2}}} holds for all messages x , y ∈ { 0 , 1 } k {\displaystyle x,y\in \{0,1\}^{k}} , the function C {\displaystyle C} is a linear map . Thus the Reed–Muller code is a linear code . For the code RM( 2 , 4 ) , the parameters are as follows: r = 2 m = 4 k = ( 4 2 ) + ( 4 1 ) + ( 4 0 ) = 6 + 4 + 1 = 11 n = 2 m = 16 {\textstyle {\begin{aligned}r&=2\\m&=4\\k&=\textstyle {\binom {4}{2}}+{\binom {4}{1}}+{\binom {4}{0}}=6+4+1=11\\n&=2^{m}=16\\\end{aligned}}} Let C : { 0 , 1 } 11 → { 0 , 1 } 16 {\textstyle C:\{0,1\}^{11}\to \{0,1\}^{16}} be the encoding function just defined. To encode the string x = 1 1010 010101 of length 11, the encoder first constructs the polynomial p x {\textstyle p_{x}} in 4 variables: p x ( Z 1 , Z 2 , Z 3 , Z 4 ) = 1 + ( 1 ⋅ Z 1 + 0 ⋅ Z 2 + 1 ⋅ Z 3 + 0 ⋅ Z 4 ) + ( 0 ⋅ Z 1 Z 2 + 1 ⋅ Z 1 Z 3 + 0 ⋅ Z 1 Z 4 + 1 ⋅ Z 2 Z 3 + 0 ⋅ Z 2 Z 4 + 1 ⋅ Z 3 Z 4 ) = 1 + Z 1 + Z 3 + Z 1 Z 3 + Z 2 Z 3 + Z 3 Z 4 {\displaystyle {\begin{aligned}p_{x}(Z_{1},Z_{2},Z_{3},Z_{4})&=1+(1\cdot Z_{1}+0\cdot Z_{2}+1\cdot Z_{3}+0\cdot Z_{4})+(0\cdot Z_{1}Z_{2}+1\cdot Z_{1}Z_{3}+0\cdot Z_{1}Z_{4}+1\cdot Z_{2}Z_{3}+0\cdot Z_{2}Z_{4}+1\cdot Z_{3}Z_{4})\\&=1+Z_{1}+Z_{3}+Z_{1}Z_{3}+Z_{2}Z_{3}+Z_{3}Z_{4}\end{aligned}}} Then it evaluates this polynomial at all 16 evaluation points (0101 means Z 1 = 0 , Z 2 = 1 , Z 3 = 0 , Z 4 = 1 ) {\displaystyle Z_{1}=0,Z_{2}=1,Z_{3}=0,Z_{4}=1)} : p x ( 0000 ) = 1 , p x ( 0001 ) = 1 , p x ( 0010 ) = 0 , p x ( 0011 ) = 1 , {\displaystyle p_{x}(0000)=1,\;p_{x}(0001)=1,\;p_{x}(0010)=0,\;p_{x}(0011)=1,\;} p x ( 0100 ) = 1 , p x ( 0101 ) = 1 , p x ( 0110 ) = 1 , p x ( 0111 ) = 0 , {\displaystyle p_{x}(0100)=1,\;p_{x}(0101)=1,\;p_{x}(0110)=1,\;p_{x}(0111)=0,\;} p x ( 1000 ) = 0 , p x ( 1001 ) = 0 , p x ( 1010 ) = 0 , p x ( 1011 ) = 1 , {\displaystyle p_{x}(1000)=0,\;p_{x}(1001)=0,\;p_{x}(1010)=0,\;p_{x}(1011)=1,\;} p x ( 1100 ) = 0 , p x ( 1101 ) = 0 , p x ( 1110 ) = 1 , p x ( 1111 ) = 0 . {\displaystyle p_{x}(1100)=0,\;p_{x}(1101)=0,\;p_{x}(1110)=1,\;p_{x}(1111)=0\,.} As a result, C(1 1010 010101) = 1101 1110 0001 0010 holds. As was already mentioned, Lagrange interpolation can be used to efficiently retrieve the message from a codeword. However, a decoder needs to work even if the codeword has been corrupted in a few positions, that is, when the received word is different from any codeword. In this case, a local decoding procedure can help. The algorithm from Reed is based on the following property: you start from the code word, that is a sequence of evaluation points from an unknown polynomial p x {\textstyle p_{x}} of F 2 [ X 1 , X 2 , . . . , X m ] {\textstyle {\mathbb {F} }_{2}[X_{1},X_{2},...,X_{m}]} of degree at most r {\textstyle r} that you want to find. The sequence may contains any number of errors up to 2 m − r − 1 − 1 {\textstyle 2^{m-r-1}-1} included. 
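As a quick check on the encoding example above (and to produce the starting point for the decoding discussion that follows), the RM(2,4) codeword can be recomputed with a short sketch. This is an illustrative script, not part of the article; it assumes the monomial ordering used for the message above, namely the constant term, then Z1,...,Z4, then the products ZiZj.

from itertools import combinations

def rm_encode(r, m, message_bits):
    # Monomials of degree <= r, ordered by degree: (), (Z1), ..., (Zm), (Z1 Z2), ...
    monomials = [s for d in range(r + 1) for s in combinations(range(m), d)]
    assert len(message_bits) == len(monomials)
    codeword = []
    for point in range(2 ** m):
        # Bit i of 'point' (most significant first) is the value of Z_(i+1).
        z = [(point >> (m - 1 - i)) & 1 for i in range(m)]
        value = 0
        for coeff, s in zip(message_bits, monomials):
            term = coeff
            for i in s:
                term *= z[i]
            value = (value + term) % 2          # all arithmetic is mod 2
        codeword.append(value)
    return codeword

message = [int(b) for b in "11010010101"]
print("".join(str(b) for b in rm_encode(2, 4, message)))   # 1101111000010010

The printed word agrees with the codeword C(1 1010 010101) = 1101 1110 0001 0010 computed above.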
Reed's algorithm rests on the following property. If we take a monomial μ of the highest degree d in p_x and sum the evaluations of the polynomial over all points at which the variables of μ range over 0 and 1 while every other variable is set to 0, we obtain the coefficient (0 or 1) of μ in p_x; there are 2^d such points. This is because every proper divisor of μ contributes an even number of times to that sum, while μ itself contributes exactly once. To allow for errors, note that the other variables do not have to be fixed at 0: they can be fixed at any values. So instead of computing the sum only once, with the variables outside μ set to 0, we compute it 2^(m−d) times, once for each assignment of the other variables. If there are no errors, all of these sums equal the coefficient being sought. The algorithm therefore takes the majority of the answers as the value of the coefficient. If the minority is larger than the maximum number of correctable errors, the decoding step fails, indicating that there are too many errors in the input word. Once a coefficient has been computed, if it equals 1, the input word is updated by removing the contribution of μ (that is, by XORing in the evaluations of μ), and the procedure continues with the next monomial, working through the monomials in decreasing order of degree. Let us reconsider the previous example, starting from the received word. With r = 2 and m = 4, at most one error can be corrected. Take the input word to be 1101 1110 0001 0110 (the previous codeword with one error). Since the degree of p_x is at most r = 2, we begin with the monomials of degree 2. For μ = X3X4, the four sums do not agree (so we know there is an error), but the minority is not larger than the maximum number of errors allowed (1), so we take the majority: the coefficient of μ is 1. We remove μ from the word before continuing: the word is 1101 1110 0001 0110, the evaluations of μ are 0001 0001 0001 0001, and the new word is 1100 1111 0000 0111. For each of the next two monomials of degree 2: one error detected, coefficient is 0, no change to the current word. For μ = X2X3: one error detected, coefficient is 1; the evaluations of μ are 0000 0011 0000 0011, and the current word becomes 1100 1100 0000 0100. For μ = X1X3: one error detected, coefficient is 1; the evaluations of μ are 0000 0000 0011 0011, and the current word becomes 1100 1100 0011 0111. For the last monomial of degree 2: one error detected, coefficient is 0, no change to the current word. All the degree-2 coefficients of the polynomial are now known, and we can move on to the monomials of degree 1. Note that with each lower degree there are twice as many sums, each taken over half as many points. For μ = X4: one error detected, coefficient is 0, no change. For μ = X3: one error detected, coefficient is 1; the evaluations of μ are 0011 0011 0011 0011, and the current word becomes 1111 1111 0000 0100. We then find 0 for μ = X2 and 1 for μ = X1, and the current word becomes 1111 1111 1111 1011. For degree 0, there are 16 sums of a single bit each.
The minority is still of size 1, and we found p x = 1 + X 1 + X 3 + X 1 X 3 + X 2 X 3 + X 3 X 4 {\textstyle p_{x}=1+X_{1}+X_{3}+X_{1}X_{3}+X_{2}X_{3}+X_{3}X_{4}} and the corresponding initial word 1 1010 010101 Using low-degree polynomials over a finite field F {\displaystyle \mathbb {F} } of size q {\displaystyle q} , it is possible to extend the definition of Reed–Muller codes to alphabets of size q {\displaystyle q} . Let m {\displaystyle m} and d {\displaystyle d} be positive integers, where m {\displaystyle m} should be thought of as larger than d {\displaystyle d} . To encode a message x ∈ F k {\textstyle x\in \mathbb {F} ^{k}} of width k = ( m + d m ) {\displaystyle k=\textstyle {\binom {m+d}{m}}} , the message is again interpreted as an m {\displaystyle m} -variate polynomial p x {\displaystyle p_{x}} of total degree at most d {\displaystyle d} and with coefficient from F {\displaystyle \mathbb {F} } . Such a polynomial indeed has ( m + d m ) {\displaystyle \textstyle {\binom {m+d}{m}}} coefficients. The Reed–Muller encoding of x {\displaystyle x} is the list of all evaluations of p x ( a ) {\displaystyle p_{x}(a)} over all a ∈ F m {\displaystyle a\in \mathbb {F} ^{m}} . Thus the block length is n = q m {\displaystyle n=q^{m}} . A generator matrix for a Reed–Muller code RM( r , m ) of length N = 2 m can be constructed as follows. Let us write the set of all m -dimensional binary vectors as: We define in N -dimensional space F 2 N {\displaystyle \mathbb {F} _{2}^{N}} the indicator vectors on subsets A ⊂ X {\displaystyle A\subset X} by: together with, also in F 2 N {\displaystyle \mathbb {F} _{2}^{N}} , the binary operation referred to as the wedge product (not to be confused with the wedge product defined in exterior algebra). Here, w = ( w 1 , w 2 , … , w N ) {\displaystyle w=(w_{1},w_{2},\ldots ,w_{N})} and z = ( z 1 , z 2 , … , z N ) {\displaystyle z=(z_{1},z_{2},\ldots ,z_{N})} are points in F 2 N {\displaystyle \mathbb {F} _{2}^{N}} ( N -dimensional binary vectors), and the operation ⋅ {\displaystyle \cdot } is the usual multiplication in the field F 2 {\displaystyle \mathbb {F} _{2}} . F 2 m {\displaystyle \mathbb {F} _{2}^{m}} is an m -dimensional vector space over the field F 2 {\displaystyle \mathbb {F} _{2}} , so it is possible to write ( F 2 ) m = { ( y m , … , y 1 ) ∣ y i ∈ F 2 } . {\displaystyle (\mathbb {F} _{2})^{m}=\{(y_{m},\ldots ,y_{1})\mid y_{i}\in \mathbb {F} _{2}\}.} We define in N -dimensional space F 2 N {\displaystyle \mathbb {F} _{2}^{N}} the following vectors with length N : v 0 = ( 1 , 1 , … , 1 ) {\displaystyle N:v_{0}=(1,1,\ldots ,1)} and where 1 ≤ i ≤ m and the H i are hyperplanes in ( F 2 ) m {\displaystyle (\mathbb {F} _{2})^{m}} (with dimension m − 1 ): The Reed–Muller RM( r , m ) code of order r and length N = 2 m is the code generated by v 0 and the wedge products of up to r of the v i , 1 ≤ i ≤ m (where by convention a wedge product of fewer than one vector is the identity for the operation). In other words, we can build a generator matrix for the RM( r , m ) code, using vectors and their wedge product permutations up to r at a time v 0 , v 1 , … , v n , … , ( v i 1 ∧ v i 2 ) , … ( v i 1 ∧ v i 2 … ∧ v i r ) {\displaystyle {v_{0},v_{1},\ldots ,v_{n},\ldots ,(v_{i_{1}}\wedge v_{i_{2}}),\ldots (v_{i_{1}}\wedge v_{i_{2}}\ldots \wedge v_{i_{r}})}} , as the rows of the generator matrix, where 1 ≤ i k ≤ m . Let m = 3. 
Then N = 8, and and The RM(1,3) code is generated by the set or more explicitly by the rows of the matrix: The RM(2,3) code is generated by the set: or more explicitly by the rows of the matrix: The following properties hold: such vectors and F 2 N {\displaystyle \mathbb {F} _{2}^{N}} have dimension N so it is sufficient to check that the N vectors span; equivalently it is sufficient to check that R M ( m , m ) = F 2 N {\displaystyle \mathrm {RM} (m,m)=\mathbb {F} _{2}^{N}} . Let x be a binary vector of length m , an element of X . Let ( x ) i denote the i th element of x . Define where 1 ≤ i ≤ m . Then I { x } = y 1 ∧ ⋯ ∧ y m {\displaystyle \mathbb {I} _{\{x\}}=y_{1}\wedge \cdots \wedge y_{m}} RM( r , m ) codes can be decoded using majority logic decoding . The basic idea of majority logic decoding is to build several checksums for each received code word element. Since each of the different checksums must all have the same value (i.e. the value of the message word element weight), we can use a majority logic decoding to decipher the value of the message word element. Once each order of the polynomial is decoded, the received word is modified accordingly by removing the corresponding codewords weighted by the decoded message contributions, up to the present stage. So for a r th order RM code, we have to decode iteratively r+1, times before we arrive at the final received code-word. Also, the values of the message bits are calculated through this scheme; finally we can calculate the codeword by multiplying the message word (just decoded) with the generator matrix. One clue if the decoding succeeded, is to have an all-zero modified received word, at the end of ( r + 1)-stage decoding through the majority logic decoding. This technique was proposed by Irving S. Reed, and is more general when applied to other finite geometry codes. A Reed–Muller code RM( r,m ) exists for any integers m ≥ 0 {\displaystyle m\geq 0} and 0 ≤ r ≤ m {\displaystyle 0\leq r\leq m} . RM( m , m ) is defined as the universe ( 2 m , 2 m , 1 {\displaystyle 2^{m},2^{m},1} ) code. RM(−1,m) is defined as the trivial code ( 2 m , 0 , ∞ {\displaystyle 2^{m},0,\infty } ). The remaining RM codes may be constructed from these elementary codes using the length-doubling construction From this construction, RM( r,m ) is a binary linear block code ( n , k , d ) with length n = 2 m , dimension k ( r , m ) = k ( r , m − 1 ) + k ( r − 1 , m − 1 ) {\displaystyle k(r,m)=k(r,m-1)+k(r-1,m-1)} and minimum distance d = 2 m − r {\displaystyle d=2^{m-r}} for r ≥ 0 {\displaystyle r\geq 0} . The dual code to RM( r,m ) is RM( m - r -1, m ). This shows that repetition and SPC codes are duals, biorthogonal and extended Hamming codes are duals and that codes with k = n /2 are self-dual. All RM( r , m ) codes with 0 ≤ m ≤ 5 {\displaystyle 0\leq m\leq 5} and alphabet size 2 are displayed here, annotated with the standard [n,k,d] coding theory notation for block codes . The code RM( r , m ) is a [ 2 m , k , 2 m − r ] 2 {\displaystyle \textstyle [2^{m},k,2^{m-r}]_{2}} -code, that is, it is a linear code over a binary alphabet , has block length 2 m {\displaystyle \textstyle 2^{m}} , message length (or dimension) k , and minimum distance 2 m − r {\displaystyle \textstyle 2^{m-r}} .
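Returning to the worked decoding example above, the majority vote over the 2^(m−d) sums can be illustrated with a short sketch. It uses the same point-ordering conventions as the encoder sketch earlier and is an illustration only, not the article's algorithm in full.

from itertools import product

def coefficient_vote(word, m, monomial_vars):
    # Majority vote for the coefficient of the monomial with variable indices
    # 'monomial_vars' (0-based). Each vote is the mod-2 sum of the word over the
    # points where the monomial's variables range over {0,1} while the remaining
    # variables are held at one fixed assignment.
    others = [i for i in range(m) if i not in monomial_vars]
    votes = []
    for fixed in product([0, 1], repeat=len(others)):
        total = 0
        for free in product([0, 1], repeat=len(monomial_vars)):
            z = [0] * m
            for i, v in zip(monomial_vars, free):
                z[i] = v
            for i, v in zip(others, fixed):
                z[i] = v
            index = sum(bit << (m - 1 - i) for i, bit in enumerate(z))
            total ^= word[index]
        votes.append(total)
    return max(set(votes), key=votes.count), votes

received = [int(b) for b in "1101111000010110"]     # the codeword with one flipped bit
print(coefficient_vote(received, 4, (2, 3)))        # Z3*Z4: (1, [1, 1, 1, 0])
print(coefficient_vote(received, 4, (0, 1)))        # Z1*Z2: (0, [0, 1, 0, 0])

On the corrupted word, the vote for Z3 Z4 comes out three to one in favour of coefficient 1, exactly as in the example, while the vote for Z1 Z2 gives 0.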
https://en.wikipedia.org/wiki/Reed–Muller_code
In information theory and coding theory , Reed–Solomon codes are a group of error-correcting codes that were introduced by Irving S. Reed and Gustave Solomon in 1960. [ 1 ] They have many applications, including consumer technologies such as MiniDiscs , CDs , DVDs , Blu-ray discs, QR codes , Data Matrix , data transmission technologies such as DSL and WiMAX , broadcast systems such as satellite communications, DVB and ATSC , and storage systems such as RAID 6 . Reed–Solomon codes operate on a block of data treated as a set of finite-field elements called symbols. Reed–Solomon codes are able to detect and correct multiple symbol errors. By adding t = n − k check symbols to the data, a Reed–Solomon code can detect (but not correct) any combination of up to t erroneous symbols, or locate and correct up to ⌊ t /2⌋ erroneous symbols at unknown locations. As an erasure code , it can correct up to t erasures at locations that are known and provided to the algorithm, or it can detect and correct combinations of errors and erasures. Reed–Solomon codes are also suitable as multiple- burst bit-error correcting codes, since a sequence of b + 1 consecutive bit errors can affect at most two symbols of size b . The choice of t is up to the designer of the code and may be selected within wide limits. There are two basic types of Reed–Solomon codes – original view and BCH view – with BCH view being the most common, as BCH view decoders are faster and require less working storage than original view decoders. Reed–Solomon codes were developed in 1960 by Irving S. Reed and Gustave Solomon , who were then staff members of MIT Lincoln Laboratory . Their seminal article was titled "Polynomial Codes over Certain Finite Fields". [ 1 ] The original encoding scheme described in the Reed and Solomon article used a variable polynomial based on the message to be encoded where only a fixed set of values (evaluation points) to be encoded are known to encoder and decoder. The original theoretical decoder generated potential polynomials based on subsets of k (unencoded message length) out of n (encoded message length) values of a received message, choosing the most popular polynomial as the correct one, which was impractical for all but the simplest of cases. This was initially resolved by changing the original scheme to a BCH-code -like scheme based on a fixed polynomial known to both encoder and decoder, but later, practical decoders based on the original scheme were developed, although slower than the BCH schemes. The result of this is that there are two main types of Reed–Solomon codes: ones that use the original encoding scheme and ones that use the BCH encoding scheme. Also in 1960, a practical fixed polynomial decoder for BCH codes developed by Daniel Gorenstein and Neal Zierler was described in an MIT Lincoln Laboratory report by Zierler in January 1960 and later in an article in June 1961. [ 2 ] The Gorenstein–Zierler decoder and the related work on BCH codes are described in a book "Error-Correcting Codes" by W. Wesley Peterson (1961). [ 3 ] [ page needed ] By 1963 (or possibly earlier), J.J. Stone (and others) [ who? ] recognized that Reed–Solomon codes could use the BCH scheme of using a fixed generator polynomial, making such codes a special class of BCH codes, [ 4 ] but Reed–Solomon codes based on the original encoding scheme are not a class of BCH codes, and depending on the set of evaluation points, they are not even cyclic codes . 
In 1969, an improved BCH scheme decoder was developed by Elwyn Berlekamp and James Massey and has since been known as the Berlekamp–Massey decoding algorithm . In 1975, another improved BCH scheme decoder was developed by Yasuo Sugiyama, based on the extended Euclidean algorithm . [ 5 ] In 1977, Reed–Solomon codes were implemented in the Voyager program in the form of concatenated error correction codes . The first commercial application in mass-produced consumer products appeared in 1982 with the compact disc , where two interleaved Reed–Solomon codes are used. Today, Reed–Solomon codes are widely implemented in digital storage devices and digital communication standards, though they are being slowly replaced by Bose–Chaudhuri–Hocquenghem (BCH) codes . For example, Reed–Solomon codes are used in the Digital Video Broadcasting (DVB) standard DVB-S , in conjunction with a convolutional inner code , but BCH codes are used with LDPC in its successor, DVB-S2 . In 1986, an original scheme decoder known as the Berlekamp–Welch algorithm was developed. In 1996, variations of original scheme decoders called list decoders or soft decoders were developed by Madhu Sudan and others, and work continues on these types of decoders (see Guruswami–Sudan list decoding algorithm ). In 2002, another original scheme decoder was developed by Shuhong Gao, based on the extended Euclidean algorithm . [ 6 ] Reed–Solomon coding is very widely used in mass storage systems to correct the burst errors associated with media defects. Reed–Solomon coding is a key component of the compact disc . It was the first use of strong error correction coding in a mass-produced consumer product, and DAT and DVD use similar schemes. In the CD, two layers of Reed–Solomon coding separated by a 28-way convolutional interleaver yields a scheme called Cross-Interleaved Reed–Solomon Coding ( CIRC ). The first element of a CIRC decoder is a relatively weak inner (32,28) Reed–Solomon code, shortened from a (255,251) code with 8-bit symbols. This code can correct up to 2 byte errors per 32-byte block. More importantly, it flags as erasures any uncorrectable blocks, i.e., blocks with more than 2 byte errors. The decoded 28-byte blocks, with erasure indications, are then spread by the deinterleaver to different blocks of the (28,24) outer code. Thanks to the deinterleaving, an erased 28-byte block from the inner code becomes a single erased byte in each of 28 outer code blocks. The outer code easily corrects this, since it can handle up to 4 such erasures per block. The result is a CIRC that can completely correct error bursts up to 4000 bits, or about 2.5 mm on the disc surface. This code is so strong that most CD playback errors are almost certainly caused by tracking errors that cause the laser to jump track, not by uncorrectable error bursts. [ 7 ] DVDs use a similar scheme, but with much larger blocks, a (208,192) inner code, and a (182,172) outer code. Reed–Solomon error correction is also used in parchive files which are commonly posted accompanying multimedia files on USENET . The distributed online storage service Wuala (discontinued in 2015) also used Reed–Solomon when breaking up files. Almost all two-dimensional bar codes such as PDF-417 , MaxiCode , Datamatrix , QR Code , Aztec Code and Han Xin code use Reed–Solomon error correction to allow correct reading even if a portion of the bar code is damaged. When the bar code scanner cannot recognize a bar code symbol, it will treat it as an erasure. 
Reed–Solomon coding is less common in one-dimensional bar codes, but is used by the PostBar symbology. Specialized forms of Reed–Solomon codes, specifically Cauchy -RS and Vandermonde -RS, can be used to overcome the unreliable nature of data transmission over erasure channels . The encoding process assumes a code of RS( N , K ) which results in N codewords of length N symbols each storing K symbols of data, being generated, that are then sent over an erasure channel. Any combination of K codewords received at the other end is enough to reconstruct all of the N codewords. The code rate is generally set to 1/2 unless the channel's erasure likelihood can be adequately modelled and is seen to be less. In conclusion, N is usually 2 K , meaning that at least half of all the codewords sent must be received in order to reconstruct all of the codewords sent. Reed–Solomon codes are also used in xDSL systems and CCSDS 's Space Communications Protocol Specifications as a form of forward error correction . One significant application of Reed–Solomon coding was to encode the digital pictures sent back by the Voyager program . Voyager introduced Reed–Solomon coding concatenated with convolutional codes , a practice that has since become very widespread in deep space and satellite (e.g., direct digital broadcasting) communications. Viterbi decoders tend to produce errors in short bursts. Correcting these burst errors is a job best done by short or simplified Reed–Solomon codes. Modern versions of concatenated Reed–Solomon/Viterbi-decoded convolutional coding were and are used on the Mars Pathfinder , Galileo , Mars Exploration Rover and Cassini missions, where they perform within about 1–1.5 dB of the ultimate limit, the Shannon capacity . These concatenated codes are now being replaced by more powerful turbo codes : The Reed–Solomon code is actually a family of codes, where every code is characterised by three parameters: an alphabet size q , a block length n , and a message length k , with k < n ≤ q {\displaystyle k<n\leq q} . The set of alphabet symbols is interpreted as the finite field F {\displaystyle F} of order q {\displaystyle q} , and thus, q {\displaystyle q} must be a prime power . In the most useful parameterizations of the Reed–Solomon code, the block length is usually some constant multiple of the message length, that is, the rate R = k n {\displaystyle R={\frac {k}{n}}} is some constant, and furthermore, the block length is either equal to the alphabet size or one less than it, i.e., n = q {\displaystyle n=q} or n = q − 1 {\displaystyle n=q-1} . [ citation needed ] There are different encoding procedures for the Reed–Solomon code, and thus, there are different ways to describe the set of all codewords. In the original view of Reed and Solomon, every codeword of the Reed–Solomon code is a sequence of function values of a polynomial of degree less than k {\displaystyle k} . [ 1 ] In order to obtain a codeword of the Reed–Solomon code, the message symbols (each within the q-sized alphabet) are treated as the coefficients of a polynomial p {\displaystyle p} of degree less than k {\displaystyle k} , over the finite field F {\displaystyle F} with q {\displaystyle q} elements. In turn, the polynomial p {\displaystyle p} is evaluated at n ≤ q {\displaystyle n\leq q} distinct points a 1 , … , a n {\displaystyle a_{1},\dots ,a_{n}} of the field F {\displaystyle F} , and the sequence of values is the corresponding codeword. 
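As a toy illustration of this evaluation-style encoding (not taken from the article: the field GF(7), the parameters n = 7, k = 3, and the evaluation points 0, 1, ..., 6 are assumed purely for the example), the message (1, 2, 3), read as p(x) = 1 + 2x + 3x^2, encodes as follows:

q, n, k = 7, 7, 3      # a prime field, so plain integer arithmetic mod q suffices

def rs_encode(message):
    # Original-view encoding: evaluate the message polynomial at n distinct field points.
    assert len(message) == k
    return [sum(m_i * pow(a, i, q) for i, m_i in enumerate(message)) % q
            for a in range(n)]

print(rs_encode([1, 2, 3]))   # [1, 6, 3, 6, 1, 2, 2]

Since two distinct polynomials of degree less than 3 can agree in at most 2 of the 7 points, any two such codewords differ in at least n − k + 1 = 5 positions.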
Common choices for a set of evaluation points include { 0 , 1 , 2 , … , n − 1 } {\displaystyle \{0,1,2,\dots ,n-1\}} , { 0 , 1 , α , α 2 , … , α n − 2 } {\displaystyle \{0,1,\alpha ,\alpha ^{2},\dots ,\alpha ^{n-2}\}} , or for n < q {\displaystyle n<q} , { 1 , α , α 2 , … , α n − 1 } {\displaystyle \{1,\alpha ,\alpha ^{2},\dots ,\alpha ^{n-1}\}} , ... , where α {\displaystyle \alpha } is a primitive element of F {\displaystyle F} . Formally, the set C {\displaystyle \mathbf {C} } of codewords of the Reed–Solomon code is defined as follows: C = { ( p ( a 1 ) , p ( a 2 ) , … , p ( a n ) ) | p is a polynomial over F of degree < k } . {\displaystyle \mathbf {C} ={\Bigl \{}\;{\bigl (}p(a_{1}),p(a_{2}),\dots ,p(a_{n}){\bigr )}\;{\Big |}\;p{\text{ is a polynomial over }}F{\text{ of degree }}<k\;{\Bigr \}}\,.} Since any two distinct polynomials of degree less than k {\displaystyle k} agree in at most k − 1 {\displaystyle k-1} points, this means that any two codewords of the Reed–Solomon code disagree in at least n − ( k − 1 ) = n − k + 1 {\displaystyle n-(k-1)=n-k+1} positions. Furthermore, there are two polynomials that do agree in k − 1 {\displaystyle k-1} points but are not equal, and thus, the distance of the Reed–Solomon code is exactly d = n − k + 1 {\displaystyle d=n-k+1} . Then the relative distance is δ = d / n = 1 − k / n + 1 / n = 1 − R + 1 / n ∼ 1 − R {\displaystyle \delta =d/n=1-k/n+1/n=1-R+1/n\sim 1-R} , where R = k / n {\displaystyle R=k/n} is the rate. This trade-off between the relative distance and the rate is asymptotically optimal since, by the Singleton bound , every code satisfies δ + R ≤ 1 + 1 / n {\displaystyle \delta +R\leq 1+1/n} . Being a code that achieves this optimal trade-off, the Reed–Solomon code belongs to the class of maximum distance separable codes . While the number of different polynomials of degree less than k and the number of different messages are both equal to q k {\displaystyle q^{k}} , and thus every message can be uniquely mapped to such a polynomial, there are different ways of doing this encoding. The original construction of Reed & Solomon interprets the message x as the coefficients of the polynomial p , whereas subsequent constructions interpret the message as the values of the polynomial at the first k points a 1 , … , a k {\displaystyle a_{1},\dots ,a_{k}} and obtain the polynomial p by interpolating these values with a polynomial of degree less than k . The latter encoding procedure, while being slightly less efficient, has the advantage that it gives rise to a systematic code , that is, the original message is always contained as a subsequence of the codeword. [ 1 ] In the original construction of Reed and Solomon, the message m = ( m 0 , … , m k − 1 ) ∈ F k {\displaystyle m=(m_{0},\dots ,m_{k-1})\in F^{k}} is mapped to the polynomial p m {\displaystyle p_{m}} with p m ( a ) = ∑ i = 0 k − 1 m i a i . {\displaystyle p_{m}(a)=\sum _{i=0}^{k-1}m_{i}a^{i}\,.} The codeword of m {\displaystyle m} is obtained by evaluating p m {\displaystyle p_{m}} at n {\displaystyle n} different points a 0 , … , a n − 1 {\displaystyle a_{0},\dots ,a_{n-1}} of the field F {\displaystyle F} . 
[ 1 ] Thus the classical encoding function C : F k → F n {\displaystyle C:F^{k}\to F^{n}} for the Reed–Solomon code is defined as follows: C ( m ) = [ p m ( a 0 ) p m ( a 1 ) ⋯ p m ( a n − 1 ) ] {\displaystyle C(m)={\begin{bmatrix}p_{m}(a_{0})\\p_{m}(a_{1})\\\cdots \\p_{m}(a_{n-1})\end{bmatrix}}} This function C {\displaystyle C} is a linear mapping , that is, it satisfies C ( m ) = A m {\displaystyle C(m)=Am} for the following n × k {\displaystyle n\times k} -matrix A {\displaystyle A} with elements from F {\displaystyle F} : C ( m ) = A m = [ 1 a 0 a 0 2 … a 0 k − 1 1 a 1 a 1 2 … a 1 k − 1 ⋮ ⋮ ⋮ ⋱ ⋮ 1 a n − 1 a n − 1 2 … a n − 1 k − 1 ] [ m 0 m 1 ⋮ m k − 1 ] {\displaystyle C(m)=Am={\begin{bmatrix}1&a_{0}&a_{0}^{2}&\dots &a_{0}^{k-1}\\1&a_{1}&a_{1}^{2}&\dots &a_{1}^{k-1}\\\vdots &\vdots &\vdots &\ddots &\vdots \\1&a_{n-1}&a_{n-1}^{2}&\dots &a_{n-1}^{k-1}\end{bmatrix}}{\begin{bmatrix}m_{0}\\m_{1}\\\vdots \\m_{k-1}\end{bmatrix}}} This matrix is a Vandermonde matrix over F {\displaystyle F} . In other words, the Reed–Solomon code is a linear code , and in the classical encoding procedure, its generator matrix is A {\displaystyle A} . There are alternative encoding procedures that produce a systematic Reed–Solomon code. One method uses Lagrange interpolation to compute polynomial p m {\displaystyle p_{m}} such that p m ( a i ) = m i for all i ∈ { 0 , … , k − 1 } . {\displaystyle p_{m}(a_{i})=m_{i}{\text{ for all }}i\in \{0,\dots ,k-1\}.} Then p m {\displaystyle p_{m}} is evaluated at the other points a k , … , a n − 1 {\displaystyle a_{k},\dots ,a_{n-1}} . C ( m ) = [ p m ( a 0 ) p m ( a 1 ) ⋯ p m ( a n − 1 ) ] {\displaystyle C(m)={\begin{bmatrix}p_{m}(a_{0})\\p_{m}(a_{1})\\\cdots \\p_{m}(a_{n-1})\end{bmatrix}}} This function C {\displaystyle C} is a linear mapping. To generate the corresponding systematic encoding matrix G, multiply the Vandermonde matrix A by the inverse of A's left square submatrix. 
G = ( A 's left square submatrix ) − 1 ⋅ A = [ 1 0 0 … 0 g 1 , k + 1 … g 1 , n 0 1 0 … 0 g 2 , k + 1 … g 2 , n 0 0 1 … 0 g 3 , k + 1 … g 3 , n ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ 0 … 0 … 1 g k , k + 1 … g k , n ] {\displaystyle G=(A{\text{'s left square submatrix}})^{-1}\cdot A={\begin{bmatrix}1&0&0&\dots &0&g_{1,k+1}&\dots &g_{1,n}\\0&1&0&\dots &0&g_{2,k+1}&\dots &g_{2,n}\\0&0&1&\dots &0&g_{3,k+1}&\dots &g_{3,n}\\\vdots &\vdots &\vdots &&\vdots &\vdots &&\vdots \\0&\dots &0&\dots &1&g_{k,k+1}&\dots &g_{k,n}\end{bmatrix}}} C ( m ) = G m {\displaystyle C(m)=Gm} for the following n × k {\displaystyle n\times k} -matrix G {\displaystyle G} with elements from F {\displaystyle F} : C ( m ) = G m = [ 1 0 0 … 0 g 1 , k + 1 … g 1 , n 0 1 0 … 0 g 2 , k + 1 … g 2 , n 0 0 1 … 0 g 3 , k + 1 … g 3 , n ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ 0 … 0 … 1 g k , k + 1 … g k , n ] [ m 0 m 1 ⋮ m k − 1 ] {\displaystyle C(m)=Gm={\begin{bmatrix}1&0&0&\dots &0&g_{1,k+1}&\dots &g_{1,n}\\0&1&0&\dots &0&g_{2,k+1}&\dots &g_{2,n}\\0&0&1&\dots &0&g_{3,k+1}&\dots &g_{3,n}\\\vdots &\vdots &\vdots &&\vdots &\vdots &&\vdots \\0&\dots &0&\dots &1&g_{k,k+1}&\dots &g_{k,n}\end{bmatrix}}{\begin{bmatrix}m_{0}\\m_{1}\\\vdots \\m_{k-1}\end{bmatrix}}} A discrete Fourier transform is essentially the same as the encoding procedure; it uses the generator polynomial p m {\displaystyle p_{m}} to map a set of evaluation points into the message values as shown above: C ( m ) = [ p m ( a 0 ) p m ( a 1 ) ⋯ p m ( a n − 1 ) ] {\displaystyle C(m)={\begin{bmatrix}p_{m}(a_{0})\\p_{m}(a_{1})\\\cdots \\p_{m}(a_{n-1})\end{bmatrix}}} The inverse Fourier transform could be used to convert an error free set of n < q message values back into the encoding polynomial of k coefficients, with the constraint that in order for this to work, the set of evaluation points used to encode the message must be a set of increasing powers of α : a i = α i {\displaystyle a_{i}=\alpha ^{i}} a 0 , … , a n − 1 = { 1 , α , α 2 , … , α n − 1 } {\displaystyle a_{0},\dots ,a_{n-1}=\{1,\alpha ,\alpha ^{2},\dots ,\alpha ^{n-1}\}} However, Lagrange interpolation performs the same conversion without the constraint on the set of evaluation points or the requirement of an error free set of message values and is used for systematic encoding, and in one of the steps of the Gao decoder . In this view, the message is interpreted as the coefficients of a polynomial p ( x ) {\displaystyle p(x)} . The sender computes a related polynomial s ( x ) {\displaystyle s(x)} of degree n − 1 {\displaystyle n-1} where n ≤ q − 1 {\displaystyle n\leq q-1} and sends the polynomial s ( x ) {\displaystyle s(x)} . The polynomial s ( x ) {\displaystyle s(x)} is constructed by multiplying the message polynomial p ( x ) {\displaystyle p(x)} , which has degree k − 1 {\displaystyle k-1} , with a generator polynomial g ( x ) {\displaystyle g(x)} of degree n − k {\displaystyle n-k} that is known to both the sender and the receiver. The generator polynomial g ( x ) {\displaystyle g(x)} is defined as the polynomial whose roots are sequential powers of the Galois field primitive α {\displaystyle \alpha } g ( x ) = ( x − α i ) ( x − α i + 1 ) ⋯ ( x − α i + n − k − 1 ) = g 0 + g 1 x + ⋯ + g n − k − 1 x n − k − 1 + x n − k {\displaystyle g(x)=\left(x-\alpha ^{i}\right)\left(x-\alpha ^{i+1}\right)\cdots \left(x-\alpha ^{i+n-k-1}\right)=g_{0}+g_{1}x+\cdots +g_{n-k-1}x^{n-k-1}+x^{n-k}} For a "narrow sense code", i = 1 {\displaystyle i=1} . C = { ( s 1 , s 2 , … , s n ) | s ( a ) = ∑ i = 1 n s i a i is a polynomial that has at least the roots α 1 , α 2 , … , α n − k } . 
{\displaystyle \mathbf {C} =\left\{\left(s_{1},s_{2},\dots ,s_{n}\right)\;{\Big |}\;s(a)=\sum _{i=1}^{n}s_{i}a^{i}{\text{ is a polynomial that has at least the roots }}\alpha ^{1},\alpha ^{2},\dots ,\alpha ^{n-k}\right\}.} The encoding procedure for the BCH view of Reed–Solomon codes can be modified to yield a systematic encoding procedure , in which each codeword contains the message as a prefix, and simply appends error correcting symbols as a suffix. Here, instead of sending s ( x ) = p ( x ) g ( x ) {\displaystyle s(x)=p(x)g(x)} , the encoder constructs the transmitted polynomial s ( x ) {\displaystyle s(x)} such that the coefficients of the k {\displaystyle k} largest monomials are equal to the corresponding coefficients of p ( x ) {\displaystyle p(x)} , and the lower-order coefficients of s ( x ) {\displaystyle s(x)} are chosen exactly in such a way that s ( x ) {\displaystyle s(x)} becomes divisible by g ( x ) {\displaystyle g(x)} . Then the coefficients of p ( x ) {\displaystyle p(x)} are a subsequence of the coefficients of s ( x ) {\displaystyle s(x)} . To get a code that is overall systematic, we construct the message polynomial p ( x ) {\displaystyle p(x)} by interpreting the message as the sequence of its coefficients. Formally, the construction is done by multiplying p ( x ) {\displaystyle p(x)} by x t {\displaystyle x^{t}} to make room for the t = n − k {\displaystyle t=n-k} check symbols, dividing that product by g ( x ) {\displaystyle g(x)} to find the remainder, and then compensating for that remainder by subtracting it. The t {\displaystyle t} check symbols are created by computing the remainder s r ( x ) {\displaystyle s_{r}(x)} : s r ( x ) = p ( x ) ⋅ x t mod g ( x ) . {\displaystyle s_{r}(x)=p(x)\cdot x^{t}\ {\bmod {\ }}g(x).} The remainder has degree at most t − 1 {\displaystyle t-1} , whereas the coefficients of x t − 1 , x t − 2 , … , x 1 , x 0 {\displaystyle x^{t-1},x^{t-2},\dots ,x^{1},x^{0}} in the polynomial p ( x ) ⋅ x t {\displaystyle p(x)\cdot x^{t}} are zero. Therefore, the following definition of the codeword s ( x ) {\displaystyle s(x)} has the property that the first k {\displaystyle k} coefficients are identical to the coefficients of p ( x ) {\displaystyle p(x)} : s ( x ) = p ( x ) ⋅ x t − s r ( x ) . {\displaystyle s(x)=p(x)\cdot x^{t}-s_{r}(x)\,.} As a result, the codewords s ( x ) {\displaystyle s(x)} are indeed elements of C {\displaystyle \mathbf {C} } , that is, they are divisible by the generator polynomial g ( x ) {\displaystyle g(x)} : [ 10 ] s ( x ) ≡ p ( x ) ⋅ x t − s r ( x ) ≡ s r ( x ) − s r ( x ) ≡ 0 mod g ( x ) . {\displaystyle s(x)\equiv p(x)\cdot x^{t}-s_{r}(x)\equiv s_{r}(x)-s_{r}(x)\equiv 0\mod g(x)\,.} This function s {\displaystyle s} is a linear mapping. To generate the corresponding systematic encoding matrix G, set G's left square submatrix to the identity matrix and then encode each row: G = [ 1 0 0 … 0 g 1 , k + 1 … g 1 , n 0 1 0 … 0 g 2 , k + 1 … g 2 , n 0 0 1 … 0 g 3 , k + 1 … g 3 , n ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ 0 … 0 … 1 g k , k + 1 … g k , n ] {\displaystyle G={\begin{bmatrix}1&0&0&\dots &0&g_{1,k+1}&\dots &g_{1,n}\\0&1&0&\dots &0&g_{2,k+1}&\dots &g_{2,n}\\0&0&1&\dots &0&g_{3,k+1}&\dots &g_{3,n}\\\vdots &\vdots &\vdots &&\vdots &\vdots &&\vdots \\0&\dots &0&\dots &1&g_{k,k+1}&\dots &g_{k,n}\end{bmatrix}}} Ignoring leading zeroes, the last row = g ( x ) {\displaystyle g(x)} . 
C ( m ) = m G {\displaystyle C(m)=mG} for the following n × k {\displaystyle n\times k} -matrix G {\displaystyle G} with elements from F {\displaystyle F} : C ( m ) = m G = [ m 0 m 1 … m k − 1 ] [ 1 0 0 … 0 g 1 , k + 1 … g 1 , n 0 1 0 … 0 g 2 , k + 1 … g 2 , n 0 0 1 … 0 g 3 , k + 1 … g 3 , n ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ 0 … 0 … 1 g k , k + 1 … g k , n ] {\displaystyle C(m)=mG={\begin{bmatrix}m_{0}&m_{1}&\ldots &m_{k-1}\end{bmatrix}}{\begin{bmatrix}1&0&0&\dots &0&g_{1,k+1}&\dots &g_{1,n}\\0&1&0&\dots &0&g_{2,k+1}&\dots &g_{2,n}\\0&0&1&\dots &0&g_{3,k+1}&\dots &g_{3,n}\\\vdots &\vdots &\vdots &&\vdots &\vdots &&\vdots \\0&\dots &0&\dots &1&g_{k,k+1}&\dots &g_{k,n}\end{bmatrix}}} The Reed–Solomon code is a [ n , k , n − k + 1] code; in other words, it is a linear block code of length n (over F ) with dimension k and minimum Hamming distance d min = n − k + 1. {\textstyle d_{\min }=n-k+1.} The Reed–Solomon code is optimal in the sense that the minimum distance has the maximum value possible for a linear code of size ( n , k ); this is known as the Singleton bound . Such a code is also called a maximum distance separable (MDS) code . The error-correcting ability of a Reed–Solomon code is determined by its minimum distance, or equivalently, by n − k {\displaystyle n-k} , the measure of redundancy in the block. If the locations of the error symbols are not known in advance, then a Reed–Solomon code can correct up to ( n − k ) / 2 {\displaystyle (n-k)/2} erroneous symbols, i.e., it can correct half as many errors as there are redundant symbols added to the block. Sometimes error locations are known in advance (e.g., "side information" in demodulator signal-to-noise ratios )—these are called erasures . A Reed–Solomon code (like any MDS code ) is able to correct twice as many erasures as errors, and any combination of errors and erasures can be corrected as long as the relation 2 E + S ≤ n − k is satisfied, where E {\displaystyle E} is the number of errors and S {\displaystyle S} is the number of erasures in the block. The theoretical error bound can be described via the following formula for the AWGN channel for FSK : [ 11 ] P b ≈ 2 m − 1 2 m − 1 1 n ∑ ℓ = t + 1 n ℓ ( n ℓ ) P s ℓ ( 1 − P s ) n − ℓ {\displaystyle P_{b}\approx {\frac {2^{m-1}}{2^{m}-1}}{\frac {1}{n}}\sum _{\ell =t+1}^{n}\ell {n \choose \ell }P_{s}^{\ell }(1-P_{s})^{n-\ell }} and for other modulation schemes: P b ≈ 1 m 1 n ∑ ℓ = t + 1 n ℓ ( n ℓ ) P s ℓ ( 1 − P s ) n − ℓ {\displaystyle P_{b}\approx {\frac {1}{m}}{\frac {1}{n}}\sum _{\ell =t+1}^{n}\ell {n \choose \ell }P_{s}^{\ell }(1-P_{s})^{n-\ell }} where t = 1 2 ( d min − 1 ) {\textstyle t={\frac {1}{2}}(d_{\min }-1)} , P s = 1 − ( 1 − s ) h {\displaystyle P_{s}=1-(1-s)^{h}} , h = m log 2 ⁡ M {\displaystyle h={\frac {m}{\log _{2}M}}} , s {\displaystyle s} is the symbol error rate in uncoded AWGN case and M {\displaystyle M} is the modulation order. For practical uses of Reed–Solomon codes, it is common to use a finite field F {\displaystyle F} with 2 m {\displaystyle 2^{m}} elements. In this case, each symbol can be represented as an m {\displaystyle m} -bit value. The sender sends the data points as encoded blocks, and the number of symbols in the encoded block is n = 2 m − 1 {\displaystyle n=2^{m}-1} . Thus a Reed–Solomon code operating on 8-bit symbols has n = 2 8 − 1 = 255 {\displaystyle n=2^{8}-1=255} symbols per block. (This is a very popular value because of the prevalence of byte-oriented computer systems.) 
The number k {\displaystyle k} , with k < n {\displaystyle k<n} , of data symbols in the block is a design parameter. A commonly used code encodes k = 223 {\displaystyle k=223} eight-bit data symbols plus 32 eight-bit parity symbols in an n = 255 {\displaystyle n=255} -symbol block; this is denoted as a ( n , k ) = ( 255 , 223 ) {\displaystyle (n,k)=(255,223)} code, and is capable of correcting up to 16 symbol errors per block. The Reed–Solomon code properties discussed above make them especially well-suited to applications where errors occur in bursts . This is because it does not matter to the code how many bits in a symbol are in error — if multiple bits in a symbol are corrupted it only counts as a single error. Conversely, if a data stream is not characterized by error bursts or drop-outs but by random single bit errors, a Reed–Solomon code is usually a poor choice compared to a binary code. The Reed–Solomon code, like the convolutional code , is a transparent code. This means that if the channel symbols have been inverted somewhere along the line, the decoders will still operate. The result will be the inversion of the original data. However, the Reed–Solomon code loses its transparency when the code is shortened ( see 'Remarks' at the end of this section ). The "missing" bits in a shortened code need to be filled by either zeros or ones, depending on whether the data is complemented or not. (To put it another way, if the symbols are inverted, then the zero-fill needs to be inverted to a one-fill.) For this reason it is mandatory that the sense of the data (i.e., true or complemented) be resolved before Reed–Solomon decoding. Whether the Reed–Solomon code is cyclic or not depends on subtle details of the construction. In the original view of Reed and Solomon, where the codewords are the values of a polynomial, one can choose the sequence of evaluation points in such a way as to make the code cyclic. In particular, if α {\displaystyle \alpha } is a primitive root of the field F {\displaystyle F} , then by definition all non-zero elements of F {\displaystyle F} take the form α i {\displaystyle \alpha ^{i}} for i ∈ { 1 , … , q − 1 } {\displaystyle i\in \{1,\dots ,q-1\}} , where q = | F | {\displaystyle q=|F|} . Each polynomial p {\displaystyle p} over F {\displaystyle F} gives rise to a codeword ( p ( α 1 ) , … , p ( α q − 1 ) ) {\displaystyle (p(\alpha ^{1}),\dots ,p(\alpha ^{q-1}))} . Since the function a ↦ p ( α a ) {\displaystyle a\mapsto p(\alpha a)} is also a polynomial of the same degree, this function gives rise to a codeword ( p ( α 2 ) , … , p ( α q ) ) {\displaystyle (p(\alpha ^{2}),\dots ,p(\alpha ^{q}))} ; since α q = α 1 {\displaystyle \alpha ^{q}=\alpha ^{1}} holds, this codeword is the cyclic left-shift of the original codeword derived from p {\displaystyle p} . So choosing a sequence of primitive root powers as the evaluation points makes the original view Reed–Solomon code cyclic . Reed–Solomon codes in the BCH view are always cyclic because BCH codes are cyclic . Designers are not required to use the "natural" sizes of Reed–Solomon code blocks. A technique known as "shortening" can produce a smaller code of any desired size from a larger code. For example, the widely used (255,223) code can be converted to a (160,128) code by padding the unused portion of the source block with 95 binary zeroes and not transmitting them. At the decoder, the same portion of the block is loaded locally with binary zeroes. The QR code, Ver 3 (29×29) uses interleaved blocks. 
The message has 26 data bytes and is encoded using two Reed-Solomon code blocks. Each block is a (255,233) Reed Solomon code shortened to a (35,13) code. The Delsarte–Goethals–Seidel [ 12 ] theorem illustrates an example of an application of shortened Reed–Solomon codes. In parallel to shortening, a technique known as puncturing allows omitting some of the encoded parity symbols. The decoders described in this section use the BCH view of a codeword as a sequence of coefficients. They use a fixed generator polynomial known to both encoder and decoder. Daniel Gorenstein and Neal Zierler developed a decoder that was described in a MIT Lincoln Laboratory report by Zierler in January 1960 and later in a paper in June 1961. [ 13 ] [ 14 ] The Gorenstein–Zierler decoder and the related work on BCH codes are described in the book Error Correcting Codes by W. Wesley Peterson (1961). [ 3 ] [ page needed ] The transmitted message, ( c 0 , … , c i , … , c n − 1 ) {\displaystyle (c_{0},\ldots ,c_{i},\ldots ,c_{n-1})} , is viewed as the coefficients of a polynomial s ( x ) = ∑ i = 0 n − 1 c i x i . {\displaystyle s(x)=\sum _{i=0}^{n-1}c_{i}x^{i}.} As a result of the Reed–Solomon encoding procedure, s ( x ) is divisible by the generator polynomial g ( x ) = ∏ j = 1 n − k ( x − α j ) , {\displaystyle g(x)=\prod _{j=1}^{n-k}(x-\alpha ^{j}),} where α is a primitive element. Since s ( x ) is a multiple of the generator g ( x ), it follows that it "inherits" all its roots: s ( x ) mod ( x − α j ) = g ( x ) mod ( x − α j ) = 0. {\displaystyle s(x){\bmod {(}}x-\alpha ^{j})=g(x){\bmod {(}}x-\alpha ^{j})=0.} Therefore, s ( α j ) = 0 , j = 1 , 2 , … , n − k . {\displaystyle s(\alpha ^{j})=0,\ j=1,2,\ldots ,n-k.} The transmitted polynomial is corrupted in transit by an error polynomial e ( x ) = ∑ i = 0 n − 1 e i x i {\displaystyle e(x)=\sum _{i=0}^{n-1}e_{i}x^{i}} to produce the received polynomial r ( x ) = s ( x ) + e ( x ) . {\displaystyle r(x)=s(x)+e(x).} Coefficient e i will be zero if there is no error at that power of x , and nonzero if there is an error. If there are ν errors at distinct powers i k of x , then e ( x ) = ∑ k = 1 ν e i k x i k . {\displaystyle e(x)=\sum _{k=1}^{\nu }e_{i_{k}}x^{i_{k}}.} The goal of the decoder is to find the number of errors ( ν ), the positions of the errors ( i k ), and the error values at those positions ( e i k ). From those, e ( x ) can be calculated and subtracted from r ( x ) to get the originally sent message s ( x ). The decoder starts by evaluating the polynomial as received at points α 1 … α n − k {\displaystyle \alpha ^{1}\dots \alpha ^{n-k}} . We call the results of that evaluation the "syndromes" S j . They are defined as S j = r ( α j ) = s ( α j ) + e ( α j ) = 0 + e ( α j ) = e ( α j ) = ∑ k = 1 ν e i k ( α j ) i k , j = 1 , 2 , … , n − k . {\displaystyle {\begin{aligned}S_{j}&=r(\alpha ^{j})=s(\alpha ^{j})+e(\alpha ^{j})=0+e(\alpha ^{j})\\&=e(\alpha ^{j})\\&=\sum _{k=1}^{\nu }e_{i_{k}}{(\alpha ^{j})}^{i_{k}},\quad j=1,2,\ldots ,n-k.\end{aligned}}} Note that s ( α j ) = 0 {\displaystyle s(\alpha ^{j})=0} because s ( x ) {\displaystyle s(x)} has roots at α j {\displaystyle \alpha ^{j}} , as shown in the previous section. The advantage of looking at the syndromes is that the message polynomial drops out. In other words, the syndromes only relate to the error and are unaffected by the actual contents of the message being transmitted. If the syndromes are all zero, the algorithm stops here and reports that the message was not corrupted in transit. 
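As a minimal sketch of the syndrome computation (assuming a prime-sized field, so that Python's integer arithmetic mod q can stand in for the field operations; the parameters and received polynomial are those of the worked GF(929) example later in this section):

q, alpha, n, k = 929, 3, 7, 3                  # parameters of the worked example below
received = [474, 487, 191, 456, 123, 2, 3]     # coefficients of r(x), lowest degree first

def syndromes(r_coeffs):
    # S_j = r(alpha^j) for j = 1, ..., n-k
    return [sum(c * pow(alpha, j * i, q) for i, c in enumerate(r_coeffs)) % q
            for j in range(1, n - k + 1)]

print(syndromes(received))   # [732, 637, 762, 925]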
For convenience, define the error locators X k and error values Y k as X k = α i k , Y k = e i k . {\displaystyle X_{k}=\alpha ^{i_{k}},\quad Y_{k}=e_{i_{k}}.} Then the syndromes can be written in terms of these error locators and error values as S j = ∑ k = 1 ν Y k X k j . {\displaystyle S_{j}=\sum _{k=1}^{\nu }Y_{k}X_{k}^{j}.} This definition of the syndrome values is equivalent to the previous since ( α j ) i k = α j ⋅ i k = ( α i k ) j = X k j {\displaystyle {(\alpha ^{j})}^{i_{k}}=\alpha ^{j\cdot i_{k}}={(\alpha ^{i_{k}})}^{j}=X_{k}^{j}} . The syndromes give a system of n − k ≥ 2 ν equations in 2 ν unknowns, but that system of equations is nonlinear in the X k and does not have an obvious solution. However, if the X k were known (see below), then the syndrome equations provide a linear system of equations [ X 1 1 X 2 1 ⋯ X ν 1 X 1 2 X 2 2 ⋯ X ν 2 ⋮ ⋮ ⋱ ⋮ X 1 n − k X 2 n − k ⋯ X ν n − k ] [ Y 1 Y 2 ⋮ Y ν ] = [ S 1 S 2 ⋮ S n − k ] , {\displaystyle {\begin{bmatrix}X_{1}^{1}&X_{2}^{1}&\cdots &X_{\nu }^{1}\\X_{1}^{2}&X_{2}^{2}&\cdots &X_{\nu }^{2}\\\vdots &\vdots &\ddots &\vdots \\X_{1}^{n-k}&X_{2}^{n-k}&\cdots &X_{\nu }^{n-k}\\\end{bmatrix}}{\begin{bmatrix}Y_{1}\\Y_{2}\\\vdots \\Y_{\nu }\end{bmatrix}}={\begin{bmatrix}S_{1}\\S_{2}\\\vdots \\S_{n-k}\end{bmatrix}},} which can easily be solved for the Y k error values. Consequently, the problem is finding the X k , because then the leftmost matrix would be known, and both sides of the equation could be multiplied by its inverse, yielding Y k In the variant of this algorithm where the locations of the errors are already known (when it is being used as an erasure code ), this is the end. The error locations ( X k ) are already known by some other method (for example, in an FM transmission, the sections where the bitstream was unclear or overcome with interference are probabilistically determinable from frequency analysis). In this scenario, up to n − k {\displaystyle n-k} errors can be corrected. The rest of the algorithm serves to locate the errors and will require syndrome values up to 2 ν {\displaystyle 2\nu } , instead of just the ν {\displaystyle \nu } used thus far. This is why twice as many error-correcting symbols need to be added as can be corrected without knowing their locations. There is a linear recurrence relation that gives rise to a system of linear equations. Solving those equations identifies those error locations X k . Define the error locator polynomial Λ( x ) as Λ ( x ) = ∏ k = 1 ν ( 1 − x X k ) = 1 + Λ 1 x 1 + Λ 2 x 2 + ⋯ + Λ ν x ν . {\displaystyle \Lambda (x)=\prod _{k=1}^{\nu }(1-xX_{k})=1+\Lambda _{1}x^{1}+\Lambda _{2}x^{2}+\cdots +\Lambda _{\nu }x^{\nu }.} The zeros of Λ( x ) are the reciprocals X k − 1 {\displaystyle X_{k}^{-1}} . This follows from the above product notation construction, since if x = X k − 1 {\displaystyle x=X_{k}^{-1}} , then one of the multiplied terms will be zero, ( 1 − X k − 1 ⋅ X k ) = 1 − 1 = 0 {\displaystyle (1-X_{k}^{-1}\cdot X_{k})=1-1=0} , making the whole polynomial evaluate to zero: Λ ( X k − 1 ) = 0. {\displaystyle \Lambda (X_{k}^{-1})=0.} Let j {\displaystyle j} be any integer such that 1 ≤ j ≤ ν {\displaystyle 1\leq j\leq \nu } . 
Multiply both sides by Y k X k j + ν {\displaystyle Y_{k}X_{k}^{j+\nu }} , and it will still be zero: Y k X k j + ν Λ ( X k − 1 ) = 0 , Y k X k j + ν ( 1 + Λ 1 X k − 1 + Λ 2 X k − 2 + ⋯ + Λ ν X k − ν ) = 0 , Y k X k j + ν + Λ 1 Y k X k j + ν X k − 1 + Λ 2 Y k X k j + ν X k − 2 + ⋯ + Λ ν Y k X k j + ν X k − ν = 0 , Y k X k j + ν + Λ 1 Y k X k j + ν − 1 + Λ 2 Y k X k j + ν − 2 + ⋯ + Λ ν Y k X k j = 0. {\displaystyle {\begin{aligned}&Y_{k}X_{k}^{j+\nu }\Lambda (X_{k}^{-1})=0,\\&Y_{k}X_{k}^{j+\nu }(1+\Lambda _{1}X_{k}^{-1}+\Lambda _{2}X_{k}^{-2}+\cdots +\Lambda _{\nu }X_{k}^{-\nu })=0,\\&Y_{k}X_{k}^{j+\nu }+\Lambda _{1}Y_{k}X_{k}^{j+\nu }X_{k}^{-1}+\Lambda _{2}Y_{k}X_{k}^{j+\nu }X_{k}^{-2}+\cdots +\Lambda _{\nu }Y_{k}X_{k}^{j+\nu }X_{k}^{-\nu }=0,\\&Y_{k}X_{k}^{j+\nu }+\Lambda _{1}Y_{k}X_{k}^{j+\nu -1}+\Lambda _{2}Y_{k}X_{k}^{j+\nu -2}+\cdots +\Lambda _{\nu }Y_{k}X_{k}^{j}=0.\end{aligned}}} Sum for k = 1 to ν , and it will still be zero: ∑ k = 1 ν ( Y k X k j + ν + Λ 1 Y k X k j + ν − 1 + Λ 2 Y k X k j + ν − 2 + ⋯ + Λ ν Y k X k j ) = 0. {\displaystyle \sum _{k=1}^{\nu }(Y_{k}X_{k}^{j+\nu }+\Lambda _{1}Y_{k}X_{k}^{j+\nu -1}+\Lambda _{2}Y_{k}X_{k}^{j+\nu -2}+\cdots +\Lambda _{\nu }Y_{k}X_{k}^{j})=0.} Collect each term into its own sum: ( ∑ k = 1 ν Y k X k j + ν ) + ( ∑ k = 1 ν Λ 1 Y k X k j + ν − 1 ) + ( ∑ k = 1 ν Λ 2 Y k X k j + ν − 2 ) + ⋯ + ( ∑ k = 1 ν Λ ν Y k X k j ) = 0. {\displaystyle \left(\sum _{k=1}^{\nu }Y_{k}X_{k}^{j+\nu }\right)+\left(\sum _{k=1}^{\nu }\Lambda _{1}Y_{k}X_{k}^{j+\nu -1}\right)+\left(\sum _{k=1}^{\nu }\Lambda _{2}Y_{k}X_{k}^{j+\nu -2}\right)+\cdots +\left(\sum _{k=1}^{\nu }\Lambda _{\nu }Y_{k}X_{k}^{j}\right)=0.} Extract the constant values of Λ {\displaystyle \Lambda } that are unaffected by the summation: ( ∑ k = 1 ν Y k X k j + ν ) + Λ 1 ( ∑ k = 1 ν Y k X k j + ν − 1 ) + Λ 2 ( ∑ k = 1 ν Y k X k j + ν − 2 ) + ⋯ + Λ ν ( ∑ k = 1 ν Y k X k j ) = 0. {\displaystyle \left(\sum _{k=1}^{\nu }Y_{k}X_{k}^{j+\nu }\right)+\Lambda _{1}\left(\sum _{k=1}^{\nu }Y_{k}X_{k}^{j+\nu -1}\right)+\Lambda _{2}\left(\sum _{k=1}^{\nu }Y_{k}X_{k}^{j+\nu -2}\right)+\cdots +\Lambda _{\nu }\left(\sum _{k=1}^{\nu }Y_{k}X_{k}^{j}\right)=0.} These summations are now equivalent to the syndrome values, which we know and can substitute in. This therefore reduces to S j + ν + Λ 1 S j + ν − 1 + ⋯ + Λ ν − 1 S j + 1 + Λ ν S j = 0. {\displaystyle S_{j+\nu }+\Lambda _{1}S_{j+\nu -1}+\cdots +\Lambda _{\nu -1}S_{j+1}+\Lambda _{\nu }S_{j}=0.} Subtracting S j + ν {\displaystyle S_{j+\nu }} from both sides yields S j Λ ν + S j + 1 Λ ν − 1 + ⋯ + S j + ν − 1 Λ 1 = − S j + ν . {\displaystyle S_{j}\Lambda _{\nu }+S_{j+1}\Lambda _{\nu -1}+\cdots +S_{j+\nu -1}\Lambda _{1}=-S_{j+\nu }.} Recall that j was chosen to be any integer between 1 and v inclusive, and this equivalence is true for all such values. Therefore, we have v linear equations, not just one. This system of linear equations can therefore be solved for the coefficients Λ i of the error-location polynomial: [ S 1 S 2 ⋯ S ν S 2 S 3 ⋯ S ν + 1 ⋮ ⋮ ⋱ ⋮ S ν S ν + 1 ⋯ S 2 ν − 1 ] [ Λ ν Λ ν − 1 ⋮ Λ 1 ] = [ − S ν + 1 − S ν + 2 ⋮ − S ν + ν ] . 
{\displaystyle {\begin{bmatrix}S_{1}&S_{2}&\cdots &S_{\nu }\\S_{2}&S_{3}&\cdots &S_{\nu +1}\\\vdots &\vdots &\ddots &\vdots \\S_{\nu }&S_{\nu +1}&\cdots &S_{2\nu -1}\end{bmatrix}}{\begin{bmatrix}\Lambda _{\nu }\\\Lambda _{\nu -1}\\\vdots \\\Lambda _{1}\end{bmatrix}}={\begin{bmatrix}-S_{\nu +1}\\-S_{\nu +2}\\\vdots \\-S_{\nu +\nu }\end{bmatrix}}.} The above assumes that the decoder knows the number of errors ν , but that number has not been determined yet. The PGZ decoder does not determine ν directly but rather searches for it by trying successive values. The decoder first assumes the largest value for a trial ν and sets up the linear system for that value. If the equations can be solved (i.e., the matrix determinant is nonzero), then that trial value is the number of errors. If the linear system cannot be solved, then the trial ν is reduced by one and the next smaller system is examined. [ 15 ] Use the coefficients Λ i found in the last step to build the error location polynomial. The roots of the error location polynomial can be found by exhaustive search. The error locators X k are the reciprocals of those roots. The order of coefficients of the error location polynomial can be reversed, in which case the roots of that reversed polynomial are the error locators X k {\displaystyle X_{k}} (not their reciprocals X k − 1 {\displaystyle X_{k}^{-1}} ). Chien search is an efficient implementation of this step. Once the error locators X k are known, the error values can be determined. This can be done by direct solution for Y k in the error equations matrix given above, or using the Forney algorithm . Calculate i k by taking the log base α {\displaystyle \alpha } of X k . This is generally done using a precomputed lookup table. Finally, e ( x ) is generated from i k and e i k and then is subtracted from r ( x ) to get the originally sent message s ( x ), with errors corrected. Consider the Reed–Solomon code defined in GF (929) with α = 3 and t = 4 (this is used in PDF417 barcodes) for a RS(7,3) code. The generator polynomial is g ( x ) = ( x − 3 ) ( x − 3 2 ) ( x − 3 3 ) ( x − 3 4 ) = x 4 + 809 x 3 + 723 x 2 + 568 x + 522. {\displaystyle g(x)=(x-3)(x-3^{2})(x-3^{3})(x-3^{4})=x^{4}+809x^{3}+723x^{2}+568x+522.} If the message polynomial is p ( x ) = 3 x 2 + 2 x + 1 , then a systematic codeword is encoded as follows: s r ( x ) = p ( x ) x t mod g ( x ) = 547 x 3 + 738 x 2 + 442 x + 455 , {\displaystyle s_{r}(x)=p(x)\,x^{t}{\bmod {g}}(x)=547x^{3}+738x^{2}+442x+455,} s ( x ) = p ( x ) x t − s r ( x ) = 3 x 6 + 2 x 5 + 1 x 4 + 382 x 3 + 191 x 2 + 487 x + 474. {\displaystyle s(x)=p(x)\,x^{t}-s_{r}(x)=3x^{6}+2x^{5}+1x^{4}+382x^{3}+191x^{2}+487x+474.} Errors in transmission might cause this to be received instead: r ( x ) = s ( x ) + e ( x ) = 3 x 6 + 2 x 5 + 123 x 4 + 456 x 3 + 191 x 2 + 487 x + 474. {\displaystyle r(x)=s(x)+e(x)=3x^{6}+2x^{5}+123x^{4}+456x^{3}+191x^{2}+487x+474.} The syndromes are calculated by evaluating r at powers of α : S 1 = r ( 3 1 ) = 3 ⋅ 3 6 + 2 ⋅ 3 5 + 123 ⋅ 3 4 + 456 ⋅ 3 3 + 191 ⋅ 3 2 + 487 ⋅ 3 + 474 = 732 , {\displaystyle S_{1}=r(3^{1})=3\cdot 3^{6}+2\cdot 3^{5}+123\cdot 3^{4}+456\cdot 3^{3}+191\cdot 3^{2}+487\cdot 3+474=732,} S 2 = r ( 3 2 ) = 637 , S 3 = r ( 3 3 ) = 762 , S 4 = r ( 3 4 ) = 925 , {\displaystyle S_{2}=r(3^{2})=637,\quad S_{3}=r(3^{3})=762,\quad S_{4}=r(3^{4})=925,} yielding the system [ 732 637 637 762 ] [ Λ 2 Λ 1 ] = [ − 762 − 925 ] = [ 167 004 ] . 
{\displaystyle {\begin{bmatrix}732&637\\637&762\end{bmatrix}}{\begin{bmatrix}\Lambda _{2}\\\Lambda _{1}\end{bmatrix}}={\begin{bmatrix}-762\\-925\end{bmatrix}}={\begin{bmatrix}167\\004\end{bmatrix}}.} Using Gaussian elimination , [ 001 000 000 001 ] [ Λ 2 Λ 1 ] = [ 329 821 ] , {\displaystyle {\begin{bmatrix}001&000\\000&001\end{bmatrix}}{\begin{bmatrix}\Lambda _{2}\\\Lambda _{1}\end{bmatrix}}={\begin{bmatrix}329\\821\end{bmatrix}},} so Λ ( x ) = 329 x 2 + 821 x + 001 , {\displaystyle \Lambda (x)=329x^{2}+821x+001,} with roots x 1 = 757 = 3 −3 and x 2 = 562 = 3 −4 . The coefficients can be reversed: R ( x ) = 001 x 2 + 821 x + 329 , {\displaystyle R(x)=001x^{2}+821x+329,} to produce roots 27 = 3 3 and 81 = 3 4 with positive exponents, but typically this isn't used. The logarithm of the inverted roots corresponds to the error locations (right to left, location 0 is the last term in the codeword). To calculate the error values, apply the Forney algorithm : Ω ( x ) = S ( x ) Λ ( x ) mod x 4 = 546 x + 732 , {\displaystyle \Omega (x)=S(x)\Lambda (x){\bmod {x}}^{4}=546x+732,} Λ ′ ( x ) = 658 x + 821 , {\displaystyle \Lambda '(x)=658x+821,} e 1 = − Ω ( x 1 ) / Λ ′ ( x 1 ) = 074 , {\displaystyle e_{1}=-\Omega (x_{1})/\Lambda '(x_{1})=074,} e 2 = − Ω ( x 2 ) / Λ ′ ( x 2 ) = 122. {\displaystyle e_{2}=-\Omega (x_{2})/\Lambda '(x_{2})=122.} Subtracting e 1 x 3 + e 2 x 4 = 74 x 3 + 122 x 4 {\displaystyle e_{1}x^{3}+e_{2}x^{4}=74x^{3}+122x^{4}} from the received polynomial r ( x ) reproduces the original codeword s . The Berlekamp–Massey algorithm is an alternate iterative procedure for finding the error locator polynomial. During each iteration, it calculates a discrepancy based on a current instance of Λ( x ) with an assumed number of errors e : Δ = S i + Λ 1 S i − 1 + ⋯ + Λ e S i − e {\displaystyle \Delta =S_{i}+\Lambda _{1}\ S_{i-1}+\cdots +\Lambda _{e}\ S_{i-e}} and then adjusts Λ( x ) and e so that a recalculated Δ would be zero. The article Berlekamp–Massey algorithm has a detailed description of the procedure. In the following example, C ( x ) is used to represent Λ( x ). Using the same data as the Peterson Gorenstein Zierler example above: The final value of C is the error locator polynomial, Λ( x ). Another iterative method for calculating both the error locator polynomial and the error value polynomial is based on Sugiyama's adaptation of the extended Euclidean algorithm . 
Define S ( x ), Λ( x ), and Ω( x ) for t syndromes and e errors: S ( x ) = S t x t − 1 + S t − 1 x t − 2 + ⋯ + S 2 x + S 1 Λ ( x ) = Λ e x e + Λ e − 1 x e − 1 + ⋯ + Λ 1 x + 1 Ω ( x ) = Ω e x e + Ω e − 1 x e − 1 + ⋯ + Ω 1 x + Ω 0 {\displaystyle {\begin{aligned}S(x)&=S_{t}x^{t-1}+S_{t-1}x^{t-2}+\cdots +S_{2}x+S_{1}\\[1ex]\Lambda (x)&=\Lambda _{e}x^{e}+\Lambda _{e-1}x^{e-1}+\cdots +\Lambda _{1}x+1\\[1ex]\Omega (x)&=\Omega _{e}x^{e}+\Omega _{e-1}x^{e-1}+\cdots +\Omega _{1}x+\Omega _{0}\end{aligned}}} The key equation is: Λ ( x ) S ( x ) = Q ( x ) x t + Ω ( x ) {\displaystyle \Lambda (x)S(x)=Q(x)x^{t}+\Omega (x)} For t = 6 and e = 3: [ Λ 3 S 6 x 8 Λ 2 S 6 + Λ 3 S 5 x 7 Λ 1 S 6 + Λ 2 S 5 + Λ 3 S 4 x 6 S 6 + Λ 1 S 5 + Λ 2 S 4 + Λ 3 S 3 x 5 S 5 + Λ 1 S 4 + Λ 2 S 3 + Λ 3 S 2 x 4 S 4 + Λ 1 S 3 + Λ 2 S 2 + Λ 3 S 1 x 3 S 3 + Λ 1 S 2 + Λ 2 S 1 x 2 S 2 + Λ 1 S 1 x S 1 ] = [ Q 2 x 8 Q 1 x 7 Q 0 x 6 0 0 0 Ω 2 x 2 Ω 1 x Ω 0 ] {\displaystyle {\begin{bmatrix}\Lambda _{3}S_{6}&x^{8}\\\Lambda _{2}S_{6}+\Lambda _{3}S_{5}&x^{7}\\\Lambda _{1}S_{6}+\Lambda _{2}S_{5}+\Lambda _{3}S_{4}&x^{6}\\S_{6}+\Lambda _{1}S_{5}+\Lambda _{2}S_{4}+\Lambda _{3}S_{3}&x^{5}\\S_{5}+\Lambda _{1}S_{4}+\Lambda _{2}S_{3}+\Lambda _{3}S_{2}&x^{4}\\S_{4}+\Lambda _{1}S_{3}+\Lambda _{2}S_{2}+\Lambda _{3}S_{1}&x^{3}\\S_{3}+\Lambda _{1}S_{2}+\Lambda _{2}S_{1}&x^{2}\\S_{2}+\Lambda _{1}S_{1}&x\\S_{1}\end{bmatrix}}={\begin{bmatrix}Q_{2}x^{8}\\Q_{1}x^{7}\\Q_{0}x^{6}\\0\\0\\0\\\Omega _{2}x^{2}\\\Omega _{1}x\\\Omega _{0}\end{bmatrix}}} The middle terms are zero due to the relationship between Λ and syndromes. The extended Euclidean algorithm can find a series of polynomials of the form where the degree of R decreases as i increases. Once the degree of R i ( x ) < t /2, then B ( x ) and Q ( x ) don't need to be saved, so the algorithm becomes: to set low order term of Λ( x ) to 1, divide Λ( x ) and Ω( x ) by A i (0): A i (0) is the constant (low order) term of A i . Using the same data as the Peterson–Gorenstein–Zierler example above: A discrete Fourier transform can be used for decoding. [ 16 ] To avoid conflict with syndrome names, let c ( x ) = s ( x ) the encoded codeword. r ( x ) and e ( x ) are the same as above. Define C ( x ), E ( x ), and R ( x ) as the discrete Fourier transforms of c ( x ), e ( x ), and r ( x ). Since r ( x ) = c ( x ) + e ( x ), and since a discrete Fourier transform is a linear operator, R ( x ) = C ( x ) + E ( x ). Transform r ( x ) to R ( x ) using discrete Fourier transform. Since the calculation for a discrete Fourier transform is the same as the calculation for syndromes, t coefficients of R ( x ) and E ( x ) are the same as the syndromes: R j = E j = S j = r ( α j ) for 1 ≤ j ≤ t {\displaystyle R_{j}=E_{j}=S_{j}=r(\alpha ^{j})\qquad {\text{for }}1\leq j\leq t} Use R 1 {\displaystyle R_{1}} through R t {\displaystyle R_{t}} as syndromes (they're the same) and generate the error locator polynomial using the methods from any of the above decoders. Let v = number of errors. 
Generate E ( x ) using the known coefficients E 1 {\displaystyle E_{1}} to E t {\displaystyle E_{t}} , the error locator polynomial, and these formulas E 0 = − 1 Λ v ( E v + Λ 1 E v − 1 + ⋯ + Λ v − 1 E 1 ) E j = − ( Λ 1 E j − 1 + Λ 2 E j − 2 + ⋯ + Λ v E j − v ) for t < j < n {\displaystyle {\begin{aligned}E_{0}&=-{\frac {1}{\Lambda _{v}}}(E_{v}+\Lambda _{1}E_{v-1}+\cdots +\Lambda _{v-1}E_{1})\\E_{j}&=-(\Lambda _{1}E_{j-1}+\Lambda _{2}E_{j-2}+\cdots +\Lambda _{v}E_{j-v})&{\text{for }}t<j<n\end{aligned}}} Then calculate C ( x ) = R ( x ) − E ( x ) and take the inverse transform (polynomial interpolation) of C ( x ) to produce c ( x ). The Singleton bound states that the minimum distance d of a linear block code of size ( n , k ) is upper-bounded by n - k + 1 . The distance d was usually understood to limit the error-correction capability to ⌊( d - 1) / 2⌋ . The Reed–Solomon code achieves this bound with equality, and can thus correct up to ⌊( n - k ) / 2⌋ errors. However, this error-correction bound is not exact. In 1999, Madhu Sudan and Venkatesan Guruswami at MIT published "Improved Decoding of Reed–Solomon and Algebraic-Geometry Codes" introducing an algorithm that allowed for the correction of errors beyond half the minimum distance of the code. [ 17 ] It applies to Reed–Solomon codes and more generally to algebraic geometric codes . This algorithm produces a list of codewords (it is a list-decoding algorithm) and is based on interpolation and factorization of polynomials over GF (2 m ) and its extensions. In 2023, building on three exciting [ according to whom? ] works, [ 18 ] [ 19 ] [ 20 ] coding theorists showed that Reed-Solomon codes defined over random evaluation points can actually achieve list decoding capacity (up to n - k errors) over linear size alphabets with high probability. However, this result is combinatorial rather than algorithmic. [ citation needed ] The algebraic decoding methods described above are hard-decision methods, which means that for every symbol a hard decision is made about its value. For example, a decoder could associate with each symbol an additional value corresponding to the channel demodulator 's confidence in the correctness of the symbol. The advent of LDPC and turbo codes , which employ iterated soft-decision belief propagation decoding methods to achieve error-correction performance close to the theoretical limit , has spurred interest in applying soft-decision decoding to conventional algebraic codes. In 2003, Ralf Koetter and Alexander Vardy presented a polynomial-time soft-decision algebraic list-decoding algorithm for Reed–Solomon codes, which was based upon the work by Sudan and Guruswami. [ 21 ] In 2016, Steven J. Franke and Joseph H. Taylor published a novel soft-decision decoder. [ 22 ] Here we present a simple MATLAB implementation for an encoder. Now the decoding part: The decoders described in this section use the Reed Solomon original view of a codeword as a sequence of polynomial values where the polynomial is based on the message to be encoded. The same set of fixed values are used by the encoder and decoder, and the decoder recovers the encoding polynomial (and optionally an error locating polynomial) from the received message. Reed and Solomon described a theoretical decoder that corrected errors by finding the most popular message polynomial. [ 1 ] The decoder only knows the set of values a 1 {\displaystyle a_{1}} to a n {\displaystyle a_{n}} and which encoding method was used to generate the codeword's sequence of values. 
The original message, the polynomial, and any errors are unknown. A decoding procedure could use a method like Lagrange interpolation on various subsets of n codeword values taken k at a time to repeatedly produce potential polynomials, until a sufficient number of matching polynomials are produced to reasonably eliminate any errors in the received codeword. Once a polynomial is determined, then any errors in the codeword can be corrected, by recalculating the corresponding codeword values. Unfortunately, in all but the simplest of cases, there are too many subsets, so the algorithm is impractical. The number of subsets is the binomial coefficient , ( n k ) = n ! ( n − k ) ! k ! {\textstyle {\binom {n}{k}}={n! \over (n-k)!k!}} , and the number of subsets is infeasible for even modest codes. For a (255,249) code that can correct 3 errors, the naïve theoretical decoder would examine 359 billion subsets. [ citation needed ] In 1986, a decoder known as the Berlekamp–Welch algorithm was developed as a decoder that is able to recover the original message polynomial as well as an error "locator" polynomial that produces zeroes for the input values that correspond to errors, with time complexity O ( n 3 ) , where n is the number of values in a message. The recovered polynomial is then used to recover (recalculate as needed) the original message. Using RS(7,3), GF(929), and the set of evaluation points a i = i − 1 If the message polynomial is The codeword is Errors in transmission might cause this to be received instead. The key equation is: Assume maximum number of errors: e = 2 . The key equation becomes: [ 001 000 928 000 000 000 000 006 006 928 928 928 928 928 123 246 928 927 925 921 913 456 439 928 926 920 902 848 057 228 928 925 913 865 673 086 430 928 924 904 804 304 121 726 928 923 893 713 562 ] [ e 0 e 1 q 0 q 1 q 2 q 3 q 4 ] = [ 000 923 437 541 017 637 289 ] {\displaystyle {\begin{bmatrix}001&000&928&000&000&000&000\\006&006&928&928&928&928&928\\123&246&928&927&925&921&913\\456&439&928&926&920&902&848\\057&228&928&925&913&865&673\\086&430&928&924&904&804&304\\121&726&928&923&893&713&562\end{bmatrix}}{\begin{bmatrix}e_{0}\\e_{1}\\q_{0}\\q_{1}\\q_{2}\\q_{3}\\q_{4}\end{bmatrix}}={\begin{bmatrix}000\\923\\437\\541\\017\\637\\289\end{bmatrix}}} Using Gaussian elimination : [ 001 000 000 000 000 000 000 000 001 000 000 000 000 000 000 000 001 000 000 000 000 000 000 000 001 000 000 000 000 000 000 000 001 000 000 000 000 000 000 000 001 000 000 000 000 000 000 000 001 ] [ e 0 e 1 q 0 q 1 q 2 q 3 q 4 ] = [ 006 924 006 007 009 916 003 ] {\displaystyle {\begin{bmatrix}001&000&000&000&000&000&000\\000&001&000&000&000&000&000\\000&000&001&000&000&000&000\\000&000&000&001&000&000&000\\000&000&000&000&001&000&000\\000&000&000&000&000&001&000\\000&000&000&000&000&000&001\end{bmatrix}}{\begin{bmatrix}e_{0}\\e_{1}\\q_{0}\\q_{1}\\q_{2}\\q_{3}\\q_{4}\end{bmatrix}}={\begin{bmatrix}006\\924\\006\\007\\009\\916\\003\end{bmatrix}}} Recalculate P ( x ) where E ( x ) = 0 : {2, 3} to correct b resulting in the corrected codeword: In 2002, an improved decoder was developed by Shuhong Gao, based on the extended Euclid algorithm. [ 23 ] To duplicate the polynomials generated by Berlekamp Welsh, divide Q ( x ) and E ( x ) by most significant coefficient of E ( x ) = 708. Recalculate P ( x ) where E ( x ) = 0 : {2, 3} to correct b resulting in the corrected codeword:
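The PGZ worked example above (an RS(7,3) code over GF(929) with α = 3) can be checked numerically. The following Python sketch is illustrative only, not a production decoder: it assumes exactly ν = 2 errors, brute-forces the roots of Λ(x) and the discrete logarithms, and solves the small linear systems by Gaussian elimination modulo the prime 929.

```python
# Minimal numeric check of the PGZ worked example: RS(7,3) over GF(929), alpha = 3.
# Assumes exactly nu = 2 errors; all arithmetic is modulo the prime 929.
P, ALPHA, NSYN = 929, 3, 4

def poly_eval(coeffs, x):
    """Evaluate a polynomial given as [c0, c1, ...] at x, modulo P."""
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

def solve(matrix, rhs):
    """Solve a small linear system over GF(P) by Gaussian elimination."""
    n = len(rhs)
    a = [row[:] + [rhs[i]] for i, row in enumerate(matrix)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if a[r][col] % P)
        a[col], a[pivot] = a[pivot], a[col]
        inv = pow(a[col][col], -1, P)
        a[col] = [(v * inv) % P for v in a[col]]
        for r in range(n):
            if r != col and a[r][col]:
                f = a[r][col]
                a[r] = [(a[r][c] - f * a[col][c]) % P for c in range(n + 1)]
    return [a[r][n] for r in range(n)]

# Received word r(x) = 3x^6 + 2x^5 + 123x^4 + 456x^3 + 191x^2 + 487x + 474.
r = [474, 487, 191, 456, 123, 2, 3]

# Syndromes S_j = r(alpha^j); the text gives 732, 637, 762, 925.
S = [poly_eval(r, pow(ALPHA, j, P)) for j in range(1, NSYN + 1)]
print("syndromes:", S)

# PGZ for nu = 2: [[S1, S2], [S2, S3]] [L2, L1]^T = [-S3, -S4]^T.
L2, L1 = solve([[S[0], S[1]], [S[1], S[2]]], [(-S[2]) % P, (-S[3]) % P])
print("Lambda(x) = %d x^2 + %d x + 1" % (L2, L1))        # expect 329 x^2 + 821 x + 1

# Brute-force the roots of Lambda(x); error locators are their inverses.
roots = [x for x in range(1, P) if (L2 * x * x + L1 * x + 1) % P == 0]
locators = [pow(x, -1, P) for x in roots]

# Error positions i_k are discrete logs of the locators to base alpha.
dlog = {pow(ALPHA, i, P): i for i in range(len(r))}
positions = sorted(dlog[X] for X in locators)
print("error positions:", positions)                      # expect [3, 4]

# Error values from the first two syndrome equations, then correct r(x).
X = [pow(ALPHA, i, P) for i in positions]
Y = solve([[X[0], X[1]], [X[0] ** 2 % P, X[1] ** 2 % P]], [S[0], S[1]])
for i, y in zip(positions, Y):
    r[i] = (r[i] - y) % P
print("corrected:", r)   # expect [474, 487, 191, 382, 1, 2, 3], i.e. s(x)
```

Running this reproduces the syndromes (732, 637, 762, 925), the locator polynomial Λ(x) = 329x² + 821x + 1, the error positions 3 and 4, and the corrected codeword s(x) = 3x⁶ + 2x⁵ + x⁴ + 382x³ + 191x² + 487x + 474.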
https://en.wikipedia.org/wiki/Reed–Solomon_error_correction
Reef Ball Foundation, Inc. is a 501(c)(3) non-profit organization that functions as an international environmental non-governmental organization . The foundation uses reef ball artificial reef technology, combined with coral propagation, transplant technology, public education, and community training to build, restore and protect coral reefs . The foundation has established "reef ball reefs" in 59 countries. Over 550,000 reef balls have been deployed in more than 4,000 projects. Reef Ball Development Group was founded in 1993 by Todd Barber, with the goal of helping to preserve and protect coral reefs for the benefit of future generations. [ 2 ] Barber witnessed his favorite coral reef on Grand Cayman destroyed by Hurricane Gilbert , and wanted to do something to help increase the resiliency of eroding coral reefs. Barber and his father patented [ 3 ] the idea of building reef substrate modules with a central inflatable bladder, so that the modules would be buoyant , making them easy to deploy by hand or with a small boat, rather than requiring heavy machinery. Over the next few years, with the help of research colleagues at University of Georgia , Nationwide Artificial Reef Coordinators and the Florida Institute of Technology (FIT), Barber, his colleagues, and business partners worked to perfect the design. In 1997, Kathy Kirbo established The Reef Ball Foundation, Inc as a non-profit organization with original founders being [ 4 ] Todd Barber as chairman and charter member, Kathy Kirbo founding executive director, board secretary, and charter member, Larry Beggs as vice president and a charter member and Eric Krasle as treasurer and a charter member, Jay Jorgensen as a charter member. Reef balls can be found in almost every coastal state in the United States, and on every continent including Antarctica. [ 5 ] The foundation has expanded the scope of its projects to include coral rescue, propagation and transplant operations, beach restorations, mangrove restorations and nursery development. Reef Ball also participates in education and outreach regarding environmental stewardship and coral reefs. In 2001, Reef Ball Foundation took control of the Reef Ball Development Group, and operates all aspects of the business as a non-profit organization. By 2007, the foundation has deployed 550,000 reef balls worldwide. [ 6 ] In 2019, Reef Ball Foundation deployed 1,400 reef balls in the shores of Progreso, Yucatán in Mexico. Artificial reefs were also built in Quintana Roo , Baja California , Colima , Veracruz , and Campache . Almost 25,000 reef balls have been established in the surrounding seas of Mexico. [ 7 ] The Reef Ball Foundation manufactures reef balls for open ocean deployment in sizes from 0.3 to 2.5 metres (1 to 8 ft) in diameter and 15 to 3,500 kilograms (30 to 8,000 lb) in weight. Reef balls are hollow, and typically have several convex-concave holes of varying sizes to most closely approximate natural coral reef conditions by creating whirlpools . Reef balls are made from pH -balanced microsilica concrete, and are treated to create a rough surface texture, in order to promote settling by marine organisms such as corals , algae , coralline algae and sponges . Over the last decade, research has been conducted with respect to the ability of artificial reefs to produce or attract biomass , [ 8 ] the effectiveness of reef balls in replicating natural habitat, [ 9 ] [ 10 ] and mitigating disasters. 
[ 11 ] The use of reef balls as breakwaters and for beach stabilization has been extensively studied. [ 12 ] [ 13 ] [ 14 ] The foundation undertakes an array of projects including artificial reef deployment, estuary restoration, mangrove plantings, oyster reef creation, coral propagation, natural disaster recovery, erosion control, and education. Notable projects include: The trend in artificial reef development has been toward the construction of designed artificial reefs, built from materials specifically designed to function as reefs. Designed systems (such as reef balls) can be modified to achieve a variety of goals. These include coral reef rehabilitation , fishery enhancement, snorkeling and diving trails, beach erosion protection, surfing enhancement, fish spawning sites, planters for mangrove replanting, enhancement of lobster fisheries, creation of oyster reefs, estuary rehabilitation , and even exotic uses such as deep water Oculina coral replanting. Designed systems can overcome many of the problems associated with "materials of opportunity" such as stability in storms , durability, biological fit, lack of potential pollution problems, availability, and reduction in long-term artificial reef costs. Designed reefs have been developed specifically for coral reef rehabilitation, and can therefore be used in a more specific niche than materials of opportunity. Some examples of specialized adaptations which "designed reefs" can use include: specialized surface textures, coral planting attachment points, specialized pH-neutral surfaces (such as neutralized concrete, ceramics, or mineral accretion surfaces), fissures to create currents for corals, and avoidance of materials such as iron (which may cause algae to overgrow coral). [ citation needed ] Other types of designed systems can create aquaculture opportunities for lobsters, create oyster beds, or be used for a large variety of other specialized needs.
https://en.wikipedia.org/wiki/Reef_Ball_Foundation
Reef Life Survey is a marine life monitoring programme [ 1 ] based in Hobart , Tasmania . It is international in scope, but predominantly Australian , as a large proportion of the volunteers are Australian. Most of the surveys are done by volunteer recreational divers , collecting biodiversity data for marine conservation . The database is available to marine ecology researchers, and is used by several marine protected area managements in Australia, [ 2 ] [ 3 ] New Zealand, American Samoa and the eastern Pacific. Reef Life Survey provides data to improve biodiversity conservation and the sustainable management of marine resources. They collect and curate biodiversity information at spatial and temporal scales beyond those possible by most scientific dive teams which have to work with limited resources, by using volunteer recreational divers trained in the RLS survey procedures. [ 1 ] [ 4 ] The University of Tasmania houses and manages the RLS database, and the data is freely available to the public for non-profit purposes through public outputs, including their website. Reef Life Survey was started by researchers at the University of Tasmania and initially funded by the Commonwealth Environment Research Facilities (CERF) Program. This program is the core activity of the Reef Life Survey Foundation Incorporated – a not for profit Australian organisation. [ 1 ] Reef Life Survey includes a volunteer network of recreational scuba divers , trained in the relevant skills, and an Advisory Committee. The advisory committee is made up of managers and scientists who use the collected data, and representatives of the recreational diver network. [ 5 ] Standard survey procedures are used matched to a variety of habitat topographies , and using simple equipment - waterproof clipboard with records sheet, underwater camera , and 50m surveyor's tape measure. The surveys are typically repeated at irregular intervals at listed sites, identified by GPS location, transect depth and direction, and are usually conducted as a pair of transects in opposite directions from the nominal position, at approximately constant depth. Data collected includes fish counts by visual census in a 5m x 5m corridor on both sides of the transect line (Method 1), mobile invertebrate counts in a 1m corridor on both sides of the line (Method 2), and photo-quadrats at 2.5m intervals along the 50m transect line. Manufactured debris may also be recorded. Off transect observations of interest are recorded separately (Method 0). Numbers and size class are recorded for fish, just numbers for most invertebrates. [ 6 ] Since 2006, divers have collected data for RLS from over 44 countries. As of September 2015, more than 4500 species have been recorded from over 7000 surveys. [ citation needed ] A circumnavigation of Australia by volunteer citizen scientists aboard the sailing catamaran Reef Dragon left Port Davey , Tasmania, on February 16, 2013, on an counterclockwise journey around the continent of Australia and ended in February 2014 in Prince of Wales Bay , Hobart. During the voyage a marine baseline of reef biodiversity for the new Coral Sea Commonwealth Marine Reserve network was established. [ 7 ] [ 8 ] [ 9 ]
https://en.wikipedia.org/wiki/Reef_Life_Survey
A reeler is a mouse mutant , so named because of its characteristic "reeling" gait . This is caused by the profound underdevelopment of the mouse's cerebellum , a segment of the brain responsible for locomotion . The mutation is autosomal and recessive , and prevents the typical cerebellar folia from forming. Cortical neurons are generated normally but are abnormally placed, resulting in disorganization of cortical laminar layers in the central nervous system . The reason is the lack of reelin , an extracellular matrix glycoprotein , which, during the corticogenesis, is secreted mainly by the Cajal–Retzius cells . In the reeler neocortex, cortical plate neurons are aligned in a practically inverted fashion ("outside-in"). In the ventricular zone of the cortex fewer neurons have been found to have radial glial processes. [ 1 ] In the dentate gyrus of hippocampus , no characteristic radial glial scaffold is formed and no compact granule cell layer is established. [ 2 ] Therefore, the reeler mouse presents a good model in which to investigate the mechanisms of establishment of the precise neuronal network during development. There are two types of the reeler mutation: In order to unravel the reelin signaling chain, attempts are made to cut the signal downstream of reelin, leaving reelin expression intact but creating the reeler phenotype , sometimes a partial phenotype, thus confirming the role of downstream molecules. The examples include: Heterozygous reeler mice, also known as HRM , while lacking the apparent phenotype seen in the homozygous reeler, also show some brain abnormalities due to the reelin deficit. Heterozygous (rl/+) mice express reelin at 50% of wild-type levels and have grossly normal brains but exhibit a progressive loss during aging of a neuronal target of reelin action, Purkinje cells . [ 15 ] The mice have reduced density of parvalbumin -containing interneurons in circumscribed regions of striatum , according to one study. [ 16 ] Studies reveal a 16% deficit in the number of Purkinje cells in 3-month-old (+/rl) and a 24% one in 16-month-old animals: surprisingly this deficit is only present in the (+/rl) males, while the females are spared. First mention of reeler mouse mutation dates back to 1951. [ 17 ] In the later years, histopathological studies revealed that the reeler cerebellum is dramatically decreased in size and the normal laminar organization found in several brain regions is disrupted (Hamburgh, 1960). In 1995, the RELN gene and reelin protein were discovered at chromosome 7q22 by Tom Curran and colleagues. [ 18 ]
https://en.wikipedia.org/wiki/Reeler
In physics, reentrant superconductivity is an effect observed in systems that lie close to the boundary between ferromagnetic and superconducting order. By its very nature, (normal) superconductivity (condensation of electrons into the BCS ground state) cannot exist together with ferromagnetism (condensation of electrons into the same spin state, all pointing in the same direction). Reentrance occurs when, as a continuous parameter is varied, superconductivity is first observed, then destroyed by the ferromagnetic order, and later reappears. An example is changing the thickness of the ferromagnetic layer in a bilayer of a superconductor and a ferromagnet: at a certain thickness superconductivity is destroyed by the Andreev reflected electrons in the ferromagnet, but if the thickness increases further, this effect disappears again. Another example is materials with a Curie temperature below the superconducting transition temperature. On cooling, superconducting order appears first in the electron system. On further cooling, the ferromagnetic order energetically wins over the superconducting order. At even lower temperatures superconductivity reenters and a nonuniform magnetic order appears: there is ferromagnetic order on short length scales, but superconducting order on large length scales. An example of such a material is uranium ditelluride (UTe 2 ), a spin-triplet superconductor, [ 1 ] discovered to be a superconductor in 2018. [ 2 ]
https://en.wikipedia.org/wiki/Reentrant_superconductivity
The Re-referenced Protein Chemical shift Database (RefDB) [ 1 ] is an NMR spectroscopy database of carefully corrected or re-referenced chemical shifts , derived from the BioMagResBank (BMRB) (Fig. 1). The database was assembled by using a structure-based chemical shift calculation program (called SHIFTX) to calculate expected protein (1)H, (13)C and (15)N chemical shifts from X-ray or NMR coordinate data of previously assigned proteins reported in the BMRB. The comparison is automatically performed by a program called SHIFTCOR . The RefDB database currently provides reference-corrected chemical shift data on more than 2000 assigned peptides and proteins . Data from the database indicates that nearly 25% of BMRB entries with (13)C protein assignments and 27% of BMRB entries with (15)N protein assignments require significant chemical shift reference readjustments. Additionally, nearly 40% of protein entries deposited in the BioMagResBank appear to have at least one assignment error. Users may download, search or browse the database through a number of methods available through the RefDB website. RefDB provides a standard chemical shift resource for biomolecular NMR spectroscopists, wishing to derive or compute chemical shift trends in peptides and proteins . All data in RefDB is non-proprietary or is derived from a non-proprietary source. It is freely accessible and available to anyone. In addition, nearly every data item is fully traceable and explicitly referenced to the original source. RefDB data is available through a public web interface and downloads. All chemical shifts in RefDB have been computationally re-referenced to DSS (a common NMR chemical shift standard). [ 1 ] RefDB is a continuously updated resource that uses web-bots to query public databases (BMRB, GenBank , Protein Data Bank ) and fetch assignment, sequence and structure data on a weekly basis. It then applies a series of data checking routines (using keywords to remove paramagnetic or denatured proteins) followed by a series of calculations to identify and correct chemical shift referencing errors. RefDB is fully web-enabled database, it stores data in two standard formats ( NMR-STAR and Shifty), it performs automated data updating, checking and validation and it provides open access to output data in a fully downloadable flat file format as well as in a hyperlinked browsable table (see Fig. 2). RefDB also supports keyword queries and sequence searches (using local BLAST ). RefDB is usually updated on a weekly basis. The RefDB database, along with its associated software, is freely available at http://refdb.wishartlab.com and at the BMRB website. [ 1 ] RefDB has been prepared using a combination of three different computer programs. The first program (SHIFTX) calculates backbone 1H, 13C and 15N chemical shifts from protein 3D coordinate data. The second program ( SHIFTCOR ) compares the calculated shifts with the observed shifts, evaluates any statistically significant differences and performs the necessary chemical shift corrections. The third program (UPDATE) automatically retrieves newly deposited BMRB data along with any corresponding PDB data. UPDATE also directs the data to SHIFTCOR and appends the ‘corrected’ chemical shift file to the RefDB database. [ 1 ]
https://en.wikipedia.org/wiki/RefDB_(chemistry)
A reference is a relationship between objects in which one object designates, or acts as a means by which to connect to or link to, another object. The first object in this relation is said to refer to the second object. It is called a name for the second object. The next object, the one to which the first object refers, is called the referent of the first object. A name is usually a phrase or expression, or some other symbolic representation . Its referent may be anything – a material object, a person, an event, an activity, or an abstract concept. References can take on many forms, including: a thought, a sensory perception that is audible ( onomatopoeia ), visual (text), olfactory , or tactile, emotional state , relationship with other, [ 1 ] spacetime coordinates, symbolic or alpha-numeric , a physical object, or an energy projection. In some cases, methods are used that intentionally hide the reference from some observers, as in cryptography . [ citation needed ] References feature in many spheres of human activity and knowledge, and the term adopts shades of meaning particular to the contexts in which it is used. Some of them are described in the sections below. The word reference is derived from Middle English referren , from Middle French référer , from Latin referre , "to carry back", formed from the prefix re - and ferre , "to bear". [ 2 ] A number of words derive from the same root, including refer , referee , referential , referent , referendum . The verb refer (to) and its derivatives may carry the sense of "connect to" or "link to", as in the meanings of reference described in this article. Another sense is "consult"; this is reflected in such expressions as reference work , reference desk , job reference , etc. In semantics , reference is generally construed as the relationships between nouns or pronouns and objects that are named by them. Hence, the word "John" refers to the person John. The word "it" refers to some previously specified object. The object referred to is called the referent of the word. [ 3 ] Sometimes the word-object relation is called " denotation "; the word denotes the object. The converse relation, the relation from object to word, is called " exemplification "; the object exemplifies what the word denotes. In syntactic analysis, if a word refers to a previous word, the previous word is called the " antecedent ". Gottlob Frege argued that reference cannot be treated as identical with meaning : " Hesperus " (an ancient Greek name for the evening star) and " Phosphorus " (an ancient Greek name for the morning star) both refer to Venus , but the astronomical fact that '"Hesperus" is "Phosphorus"' can still be informative, even if the "meanings" of "Hesperus" and "Phosphorus" are already known. This problem led Frege to distinguish between the sense and reference of a word. [ citation needed ] The very concept of the linguistic sign is the combination of content and expression, the former of which may refer entities in the world or refer more abstract concepts, e.g. thought. Certain parts of speech exist only to express reference, namely anaphora such as pronouns . The subset of reflexives expresses co-reference of two participants in a sentence. These could be the agent (actor) and patient (acted on), as in "The man washed himself", the theme and recipient, as in "I showed Mary to herself", or various other possible combinations. 
In computer science , references are data types that refer to an object elsewhere in memory and are used to construct a wide variety of data structures , such as linked lists . Generally, a reference is a value that enables a program to directly access the particular data item. Most programming languages support some form of reference. For the specific type of reference used in the C++ language, see reference (C++) . The notion of reference is also important in relational database theory ; see referential integrity . References to many types of printed matter may come in an electronic or machine-readable form. For books, there exists the ISBN and for journal articles, the Digital object identifier (DOI) is gaining relevance. Information on the Internet may be referred to by a Uniform Resource Identifier (URI) . In terms of mental processing, a self-reference is used in psychology to establish identification with a mental state during self-analysis. This seeks to allow the individual to develop own frames of reference in a greater state of immediate awareness. However, it can also lead to circular reasoning , preventing evolution of thought. [ 4 ] According to Perceptual Control Theory (PCT), a reference condition is the state toward which a control system's output tends to alter a controlled quantity. The main proposition is that "All behavior is oriented all of the time around the control of certain quantities with respect to specific reference conditions." [ 5 ] In academics and scholarship, a reference or bibliographical reference is a piece of information provided in a footnote or bibliography of a written work such as a book, article, essay, report, oration or any other text type , specifying the written work of another person used in the creation of that text. A bibliographical reference mostly includes the full name of the author , the title of their work and the year of publication. The primary purpose of references is to allow readers to examine the sources of a text, either for validity or to learn more about the subject. Such items are often listed at the end of a work in a section marked References or Bibliography . References are particularly important as for the use of citations , since copying of material by another author without proper reference and / or without required permissions is considered plagiarism , and may be tantamount to copyright infringement , which can be subject to legal proceedings . A reference section contains only those works indeed cited in the main text of a work. In contrast, a bibliographical section often contains works not cited by the author, but used as background reading or listed as potentially useful to the reader. Keeping a diary allows an individual to use references for personal organization, whether or not anyone else understands the systems of reference used. However, scholars have studied methods of reference because of their key role in communication and co-operation between different people, and also because of misunderstandings that can arise. Modern academic study of bibliographical references has been developing since the 19th century. [ 6 ] In patent law, a reference is a document that can be used to show the state of knowledge at a given time and that therefore may make a claimed invention obvious or anticipated . Examples of references are patents of any country, magazine articles, Ph.D. 
theses that are indexed and thus accessible to those interested in finding information about the subject matter, and to some extent Internet material that is similarly accessible. In art , a reference is an item from which a work is based. This may include: Another example of reference is samples of various musical works being incorporated into a new one.
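Returning to the computer-science sense of reference described above (a value that lets a program reach an object stored elsewhere in memory), the following minimal Python sketch shows references being used to build a linked list; the class and field names are illustrative only.

```python
# Illustrative only: a singly linked list built from object references.
# Each node holds a value and a reference ("next") to the following node,
# or None when there is no referent.
class Node:
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node   # a reference to another Node, not a copy of it

# Build the list 1 -> 2 -> 3 by linking nodes through references.
head = Node(1, Node(2, Node(3)))

# Follow the chain of references to visit every element.
node = head
while node is not None:
    print(node.value)
    node = node.next
```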
https://en.wikipedia.org/wiki/Reference
Reference Signal Received Power (RSRP) is a measure of the received power level in an LTE cell network. It is the average power received from a single reference signal. [ 1 ] RSRP is used to measure the coverage of the LTE cell on the downlink (DL). The UE sends RRC measurement reports that include RSRP values in a binned format. The reporting range of RSRP is defined from −140 to −44 dBm with 1 dB resolution. The main purpose of RSRP is to determine the best cell on the DL radio interface and to select this cell as the serving cell for either initial random access or intra-LTE handover. RRC measurement reports containing RSRP results are sent by the UE when a predefined event trigger criterion is met.
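As an illustration of the binned reporting described above, the sketch below converts a measured RSRP in dBm to an integer report value and back. The 0–97 index convention is assumed here from 3GPP TS 36.133 (index 0 for anything below −140 dBm, index 97 for −44 dBm and above, 1 dB bins in between) and should be checked against the specification before being relied on.

```python
import math

# Sketch of the binned RSRP reporting range (-140 dBm to -44 dBm, 1 dB steps).
# The 0..97 index convention is assumed from 3GPP TS 36.133 and should be
# verified against the specification before use.

def rsrp_to_report(rsrp_dbm: float) -> int:
    """Map a measured RSRP in dBm to the integer value carried in RRC reports."""
    if rsrp_dbm < -140:
        return 0                          # below the reporting range
    if rsrp_dbm >= -44:
        return 97                         # at or above the top of the range
    return int(math.floor(rsrp_dbm)) + 141  # 1 dB resolution bins in between

def report_to_range(index: int) -> str:
    """Give the dBm interval represented by a reported index."""
    if index <= 0:
        return "RSRP < -140 dBm"
    if index >= 97:
        return "RSRP >= -44 dBm"
    low = index - 141
    return f"{low} dBm <= RSRP < {low + 1} dBm"

print(rsrp_to_report(-95.3))   # -> 45
print(report_to_range(45))     # -> "-96 dBm <= RSRP < -95 dBm"
```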
https://en.wikipedia.org/wiki/Reference_Signal_Received_Power
A reference card, also known as a reference sheet, quick reference card, crib sheet or job aid, is a concise bundling of condensed notes about a specific topic, such as mathematical formulas [ 1 ] to calculate area/volume, or common syntactic rules and idioms of a particular computer platform, application program, or formal language. It serves as an ad hoc memory aid for an experienced user. [ 2 ] Despite what the name reference card may suggest (for example, a 3×5 index card, 8 cm × 13 cm), the term also applies to sheets of paper or online pages, as in the context of programming languages or markup languages. However, the concept is now also being used to present concise information in many other fields. As in the examples below, reference cards are typically one to a few pages in length. Pages are organized into one or more columns. Across the columns, the reference is split into sections and organized by topic. Each section contains a list of entries, with each entry containing a term and its description and usage information. Terms might include keywords, syntactic constructs, functions, methods, or macros in a computer language. In a reference card for a program with a graphical user interface, terms may include menu entries, icons or key combinations representing program actions. Due to its logical structure and conciseness, finding information in a reference card is trivial for humans and requires no computer interaction. It is therefore convenient for a user to print out a reference card. While reference cards can be printed on card stock, it is common to print them on ordinary printer paper. With the advent of portable electronic devices that can display documents, digital reference cards stored in PDF or HTML formats have become more common. This is in contrast to user guides, which tend to be rather long and verbose and which have (in comparison to reference cards) a lower information density.
https://en.wikipedia.org/wiki/Reference_card
In statistics , the reference class problem is the problem of deciding what class to use when calculating the probability applicable to a particular case. For example, to estimate the probability of an aircraft crashing, we could refer to the frequency of crashes among various different sets of aircraft: all aircraft, this make of aircraft, aircraft flown by this company in the last ten years, etc. In this example, the aircraft for which we wish to calculate the probability of a crash is a member of many different classes, in which the frequency of crashes differs. It is not obvious which class we should refer to for this aircraft. In general, any case is a member of very many classes among which the frequency of the attribute of interest differs. The reference class problem discusses which class is the most appropriate to use. More formally, many arguments in statistics take the form of a statistical syllogism : F {\displaystyle F} is called the "reference class" and G {\displaystyle G} is the "attribute class" and I {\displaystyle I} is the individual object. How is one to choose an appropriate class F {\displaystyle F} ? In Bayesian statistics , the problem arises as that of deciding on a prior probability for the outcome in question (or when considering multiple outcomes, a prior probability distribution). John Venn stated in 1876 that "every single thing or event has an indefinite number of properties or attributes observable in it, and might therefore be considered as belonging to an indefinite number of different classes of things", leading to problems with how to assign probabilities to a single case. He used as an example the probability that John Smith, a consumptive Englishman aged fifty, will live to sixty-one. [ 1 ] The name "problem of the reference class" was given by Hans Reichenbach , who wrote, "If we are asked to find the probability holding for an individual future event, we must first incorporate the event into a suitable reference class. An individual thing or event may be incorporated in many reference classes, from which different probabilities will result." [ 2 ] There has also been discussion of the reference class problem in philosophy [ 3 ] and in the life sciences , e.g., clinical trial prediction. [ 4 ] Applying Bayesian probability in practice involves assessing a prior probability which is then applied to a likelihood function and updated through the use of Bayes' theorem . Suppose we wish to assess the probability of guilt of a defendant in a court case in which DNA (or other probabilistic) evidence is available. We first need to assess the prior probability of guilt of the defendant. We could say that the crime occurred in a city of 1,000,000 people, of whom 15% meet the requirements of being the same sex, age group and approximate description as the perpetrator. That suggests a prior probability of guilt of 1 in 150,000. We could cast the net wider and say that there is, say, a 25% chance that the perpetrator is from out of town, but still from this country, and construct a different prior estimate. We could say that the perpetrator could come from anywhere in the world, and so on. Legal theorists have discussed the reference class problem particularly with reference to the Shonubi case. Charles Shonubi , a Nigerian drug smuggler, was arrested at JFK Airport on Dec 10, 1991, and convicted of heroin importation. 
The severity of his sentence depended not only on the amount of drugs on that trip, but the total amount of drugs he was estimated to have imported on seven previous occasions on which he was not caught. Five separate legal cases debated how that amount should be estimated. In one case, "Shonubi III", the prosecution presented statistical evidence of the amount of drugs found on Nigerian drug smugglers caught at JFK Airport in the period between Shonubi's first and last trips. There has been debate over whether that is the (or a) correct reference class to use, and if so, why. [ 5 ] [ 6 ] Other legal applications involve valuation. For example, houses might be valued using the data in a database of house sales of "similar" houses. To decide on which houses are similar to a given one, one needs to know which features of a house are relevant to price. Number of bathrooms might be relevant, but not the eye color of the owner. It has been argued that such reference class problems can be solved by finding which features are relevant: a feature is relevant to house price if house price covaries with it (it affects the likelihood that the house has a higher or lower value), and the ideal reference class for an individual is the set of all instances which share with it all relevant features. [ 7 ] [ 8 ]
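The dependence of the prior on the chosen reference class, as in the court example above, can be made concrete with a small calculation. The figures for the first class are the illustrative ones from the text (a city of 1,000,000 people, 15% of whom match the description); the second, wider class is a hypothetical variant added only for comparison.

```python
# How the choice of reference class changes a prior probability of guilt.
# The city figures follow the illustrative court example in the text; the
# second, wider class is a made-up variant for comparison only.

def class_prior(population: int, match_fraction: float) -> float:
    """Prior that one particular matching individual is the perpetrator,
    assuming the perpetrator is certainly a member of this class."""
    return 1.0 / (population * match_fraction)

# Class 1: city of 1,000,000, of whom 15% match sex, age group and description.
p1 = class_prior(1_000_000, 0.15)
print(f"city class:  prior = 1 in {1 / p1:,.0f}")        # 1 in 150,000

# Class 2 (hypothetical): assume only a 75% chance the perpetrator is from the
# city at all, so a given matching resident gets a correspondingly lower prior.
p2 = 0.75 * class_prior(1_000_000, 0.15)
print(f"wider class: prior = 1 in {1 / p2:,.0f}")        # 1 in 200,000
```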
https://en.wikipedia.org/wiki/Reference_class_problem
A reference designator unambiguously identifies the location of a component within an electrical schematic or on a printed circuit board . The reference designator usually consists of one or two letters followed by a number, e.g. C3, D1, R4, U15. The number is sometimes followed by a letter, indicating that components are grouped or matched with each other, e.g. R17A, R17B. The IEEE 315 standard contains a list of Class Designation Letters to use for electrical and electronic assemblies. For example, the letter R is a reference prefix for the resistors of an assembly, C for capacitors, K for relays. Industrial electrical installations often use reference designators according to IEC 81346 . IEEE 200-1975 or "Standard Reference Designations for Electrical and Electronics Parts and Equipments" is a standard that was used to define referencing naming systems for collections of electronic equipment. IEEE 200 was ratified in 1975. The IEEE renewed the standard in the 1990s, but withdrew it from active support shortly thereafter. This document also has an ANSI document number, ANSI Y32.16-1975. This standard codified information from, among other sources, a United States military standard MIL-STD-16 which dates back to at least the 1950s in American industry. To replace IEEE 200-1975, ASME , a standards body for mechanical engineers, initiated the new standard ASME Y14.44-2008. This standard, along with IEEE 315-1975, provide the electrical designer with guidance on how to properly reference and annotate everything from a single circuit board to a collection of complete enclosures. ASME Y14.44-2008 [ 1 ] and IEEE 315-1975 [ 2 ] define how to reference and annotate components of electronic devices. It breaks down a system into units, and then any number of sub-assemblies. The unit is the highest level of demarcation in a system and is always a numeral. Subsequent demarcation are called assemblies and always have the Class Letter "A" as a prefix following by a sequential number starting with 1. Any number of sub-assemblies may be defined until finally reaching the component. Note that IEEE 315-1975 [ 2 ] defines separate class designation letters for separable assemblies (class designation 'A') and inseparable assemblies (class designation 'U'). Inseparable assemblies—i.e., "items which are ordinarily replaced as a single item of supply" [ 2 ] —are typically treated as components in this referencing scheme. Examples: Especially valuable is the method of referencing and annotating cables plus their connectors within and outside assemblies. Examples: A cable connecting these two might be: Connectors on this cable would be designated: ASME Y14.44-2008 continues the convention of Plug P and Jack J when assigning references for electrical connectors in assemblies where a J (or jack ) is the more fixed and P (or plug) is the less fixed of a connector pair, without regard to the gender of the connector contacts. The construction of reference designators is covered by IEEE 200-1975/ANSI Y32.16-1975 [ 3 ] (replaced by ASME Y14.44-2008 [ 1 ] ) and IEEE 315-1975. [ 2 ] The table below lists designators commonly used, and does not necessarily comply with standards. For modern use, designators are often simplified towards shorter designators, because it requires less space on silkscreens.
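As an illustration of the naming pattern described above (one or two class letters, a sequential number, an optional grouping suffix, and optional assembly prefixes such as A1), here is a small parsing sketch. The regular expression and the sample designators are hypothetical illustrations, not an implementation of IEEE 315 or ASME Y14.44.

```python
import re

# Hypothetical parser for reference designators of the form described above:
# optional assembly prefixes ("A1", "A1A2", ...) followed by one or two class
# letters, a sequential number, and an optional grouping suffix letter,
# e.g. "R17A" or "A1A2C3".
DESIGNATOR = re.compile(
    r"^(?P<assemblies>(?:A\d+)*)"      # zero or more assembly levels
    r"(?P<cls>[A-Z]{1,2})"             # class designation letter(s), e.g. R, C, U
    r"(?P<number>\d+)"                 # sequential number within the class
    r"(?P<suffix>[A-Z]?)$"             # optional suffix for grouped/matched parts
)

for text in ["C3", "R17A", "A1A2C3", "U15"]:
    match = DESIGNATOR.match(text)
    print(text, "->", match.groupdict())
# e.g. "A1A2C3" -> {'assemblies': 'A1A2', 'cls': 'C', 'number': '3', 'suffix': ''}
```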
https://en.wikipedia.org/wiki/Reference_designator
A reference dose is the United States Environmental Protection Agency 's maximum acceptable oral dose of a toxic substance, "below which no adverse noncancer health effects should result from a lifetime of exposure". Reference doses have been most commonly determined for pesticides . The EPA defines an oral reference dose (abbreviated RfD ) as: [A]n estimate, with uncertainty spanning perhaps an order of magnitude , of a daily oral exposure to the human population (including sensitive subgroups) that is likely to be without an appreciable risk of deleterious effects during a lifetime. [ 1 ] RfDs are not enforceable standards, unlike National Ambient Air Quality Standards . RfDs are risk assessment benchmarks, and the EPA tries to set other regulations so that people are not exposed to chemicals in amounts that exceed RfDs. According to the EPA in 2008, "[a]n aggregate daily exposure to a [chemical] at or below the RfD (expressed as 100 percent or less of the RfD) is generally considered acceptable by EPA." [ 2 ] States can set their own RfDs. For example, the EPA set an acute RfD for children of 0.0015 mg/kg/day for the organochlorine insecticide endosulfan , based on neurological effects observed in test animals. The EPA then looked at dietary exposure to endosulfan, and found that for the most exposed 0.1% of children aged 1–6, daily consumption of endosulfan exceeded this RfD. To remedy this, the EPA revoked the use of endosulfan on the crops that contributed the most to exposure of children: certain beans, peas, spinach, and grapes. [ 3 ] Reference doses are chemical-specific, i.e. the EPA determines a unique reference dose for every substance it evaluates. Often separate acute (0–1 month) and chronic RfDs (more than one month) are determined for the same substance. Reference doses are specific to dietary exposure. When assessing inhalation exposure, the EPA uses "reference concentrations" ( RfC s) instead of RfDs. Note that RfDs apply only to non-cancer effects; when evaluating carcinogenic effects, the EPA uses the Q 1 * method. RfDs are usually derived from animal studies. Animals (typically rats ) are dosed with varying amounts of the substance in question, and the largest dose at which no effects are observed is identified. This dose level is called the no observable effect level , or NOEL. To account for the fact that humans may be more or less susceptible than the test animal, a 10-fold "uncertainty factor" is usually applied to the NOEL. This uncertainty factor is called the " interspecies uncertainty factor" or UF inter . An additional 10-fold uncertainty factor, the "intraspecies uncertainty factor" or UF intra , is usually applied to account for the fact that some humans may be substantially more sensitive to the effects of substances than others. Additional uncertainty factors may also be applied. In general, RfD = NOEL / (UF inter × UF intra × any additional uncertainty factors). Frequently, a " lowest-observed-adverse-effect level " or LOAEL is used in place of a NOEL.
If adverse effects are observed at all dose levels tested, then the smallest dose tested, the LOAEL, is used to calculate the RfD. An additional uncertainty factor usually applied in these cases, since the NOAEL, by definition, would be lower than the LOAEL had it been observed. If studies using human subjects are used to determine a RfD, then the interspecies uncertainty factor can be reduced to 1, but generally the 10-fold intraspecies uncertainty factor is retained. Such studies are rare. As an example, consider the following determination of the RfD for the insecticide chlorpyrifos , adapted from the EPA's Interim Reregistration Eligibility Decision for chlorpyrifos. [ 4 ] The EPA determined the acute RfD to be 0.005 mg/kg/day based on a study in which male rats were administered a one-time dose of chlorpyrifos and blood cholinesterase activity was monitored. Cholinesterase inhibition was observed at all dose levels tested, the lowest of which was 1.5 mg/kg. This level was thus identified at the lowest observed adverse effect level (LOAEL). A NOAEL of 0.5 mg/kg was estimated by dividing the LOAEL by a three-fold uncertainty factor. The NOAEL was then divided by the standard 10-fold inter- and 10-fold intraspecies uncertainty factors to arrive at the RfD of 0.005 mg/kg/day. Other studies showed that fetuses and children are even more sensitive to chlorpyrifos than adults, so the EPA applies an additional ten-fold uncertainty factor to protect that subpopulation. A RfD that has been divided by an additional uncertainty factor that only applies to certain populations is called a "population adjusted dose" or PAD. For chlorpyrifos, the acute PAD (or "aPAD") is thus 5×10 −4 mg/kg/day, and it applies to infants, children, and women who are breast feeding. The EPA also determined a chronic RfD for chlorpyrifos exposure based on studies in which animals were administered low doses of the pesticide for two years. Cholinesterase inhibition was observed at all dose levels tested, and a NOAEL of 0.03 mg/kg/day estimated by dividing a LOAEL of 0.3 mg/kg/day by an uncertainty factor of 10. As with the acute RfD, the chronic RfD of 3×10 −4 mg/kg/day was determined by dividing this NOAEL by the inter- and intraspecies uncertainty factors. The chronic PAD ("cPAD") of 3×10 −5 mg/kg/day was determined by applying an additional 10-fold uncertainty factor to account for the increased susceptibility of infants and children. Like the aPAD, this cPAD applies to infants, children, and breast feeding women. Because the RfD assumes "a dose below which no adverse noncarcinogenic health effects should result from a lifetime of exposure", [ 5 ] the critical step in all chemical risk and regulatory threshold calculations is dependent upon a properly derived dose at which no observed adverse effects (NOAEL) were seen which is then divided by an uncertainty factor that considers inadequacies of the study, animal-to-human extrapolation, sensitive sub-populations, and inadequacies of the database. The RfD that is derived is not always agreed upon. Some may believe it to be overly protective while others may contend that it is not adequately protective of human health. For example, in 2002 the EPA completed its draft toxicological review of perchlorate and proposed an RfD of 0.00003 milligrams per kilogram per day (mg/kg/day) based primarily on studies that identified neurodevelopmental deficits in rat pups. These deficits were linked to maternal exposure to perchlorate. 
Subsequently, the National Academy of Sciences (NAS) reviewed the health implications of perchlorate, and in 2005 proposed a much higher alternative reference dose of 0.0007 mg/kg/day based primarily on a 2002 study by Greer et al. [ 6 ] During that study, 37 adult human subjects were split into four groups exposed to 0.007 (7 subjects), 0.02 (10 subjects), 0.1 (10 subjects), and 0.5 (10 subjects) mg/kg/day. Significant decreases in iodide uptake were found in the three highest exposure groups. Iodide uptake was not significantly reduced in the lowest exposed group, but four of the seven subjects in this group experienced inhibited iodide uptake. In 2005, the RfD proposed by NAS was accepted by the EPA and added to its Integrated Risk Information System (IRIS). In a 2005 article in the journal Environmental Health Perspectives (EHP), Gary Ginsberg and Deborah Rice argued that the 2005 NAS RfD was not protective of human health. [ 7 ] Although the Greer et al. study has generally been accepted, there is no consensus on how to derive a perchlorate RfD. One of the key differences results from how the point of departure is viewed (i.e., NOEL or LOAEL), or whether a benchmark dose should be used to derive the RfD. Defining the point of departure as a NOEL or LOAEL has implications when it comes to applying appropriate safety factors to the point of departure to derive the RfD. [ 8 ] In 2010, the Massachusetts Department of Environmental Protection set a 10-fold lower RfD (0.07 μg/kg/day) using a much higher uncertainty factor of 100. They also calculated an infant drinking water value, which neither the US EPA nor CalEPA has done. [ 9 ]
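The competing perchlorate values above can be reproduced with the same simple arithmetic, assuming the NAS figure corresponds to the Greer et al. no-effect dose divided by a 10-fold factor and the Massachusetts figure to the same dose divided by 100; this is an illustrative sketch, not an official derivation.

```python
greer_noel = 0.007            # mg/kg/day, lowest dose in Greer et al. (2002) with no significant effect
nas_rfd = greer_noel / 10     # ~0.0007 mg/kg/day, the 2005 NAS reference dose
mass_rfd = greer_noel / 100   # ~0.00007 mg/kg/day = 0.07 ug/kg/day, the 2010 Massachusetts value
print(nas_rfd, mass_rfd)
```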
https://en.wikipedia.org/wiki/Reference_dose
A reference ecosystem , also known as an ecological reference , is a " community of organisms able to act as a model or benchmark for restoration ." [ 1 ] [ 2 ] [ 3 ] Reference ecosystems usually include remnant natural areas that have not been degraded by human activities such as agriculture , logging , development , fire suppression, or non-native species invasion . Reference ecosystems are ideally complete with natural flora, fauna, abiotic elements, ecological functions, processes, and successional states. Multiple reference ecosystems may be pieced together to form the model upon which an ecological restoration project may be based.
https://en.wikipedia.org/wiki/Reference_ecosystem
An Earth ellipsoid or Earth spheroid is a mathematical figure approximating the Earth's form, used as a reference frame for computations in geodesy, astronomy, and the geosciences. Various ellipsoids have been used as approximations. It is a spheroid (an ellipsoid of revolution) whose minor axis (shorter diameter), which connects the geographical North Pole and South Pole, is approximately aligned with the Earth's axis of rotation. The ellipsoid is defined by the equatorial axis ( a ) and the polar axis ( b ); their radial difference is slightly more than 21 km, or 0.335% of a (which is not quite 6,400 km). Many methods exist for determination of the axes of an Earth ellipsoid, ranging from meridian arcs up to modern satellite geodesy or the analysis and interconnection of continental geodetic networks. Among the ellipsoids used in national surveys, several are of special importance: the Bessel ellipsoid of 1841, the international Hayford ellipsoid of 1924, and (for GPS positioning) the WGS84 ellipsoid. There are two types of ellipsoid: mean and reference. A data set which describes the global average of the Earth's surface curvature is called the mean Earth Ellipsoid. It refers to a theoretical coherence between the geographic latitude and the meridional curvature of the geoid. The latter is close to the mean sea level, and therefore an ideal Earth ellipsoid has the same volume as the geoid. While the mean Earth ellipsoid is the ideal basis of global geodesy, for regional networks a so-called reference ellipsoid may be the better choice. [ 1 ] When geodetic measurements have to be computed on a mathematical reference surface, this surface should have a similar curvature as the regional geoid; otherwise, reduction of the measurements will introduce small distortions. This is the reason for the "long life" of former reference ellipsoids like the Hayford or the Bessel ellipsoid, despite the fact that their main axes deviate by several hundred meters from the modern values. Another reason is a legal one: the coordinates of millions of boundary stones should remain fixed for a long period. If their reference surface changes, the coordinates themselves also change. However, for international networks, GPS positioning, or astronautics, these regional reasons are less relevant. As knowledge of the Earth's figure becomes increasingly accurate, the International Union of Geodesy and Geophysics (IUGG) adapts the axes of the Earth ellipsoid to the best available data. In geodesy, a reference ellipsoid is a mathematically defined surface that approximates the geoid, the truer, imperfect figure of the Earth or other planetary body. Unlike a perfect, smooth, and unaltered sphere, the geoid reflects the undulations of the body's gravity due to variations in the composition and density of the interior, as well as the flattening caused by the centrifugal force from the rotation of the body (for planetary bodies that do rotate). Because of their relative simplicity, reference ellipsoids are used as a preferred surface on which geodetic network computations are performed and point coordinates such as latitude, longitude, and elevation are defined. In the context of standardization and geographic applications, a geodesic reference ellipsoid is the mathematical model used as the foundation for spatial reference system or geodetic datum definitions.
In geophysics, geodesy, and related areas, the word 'ellipsoid' is understood to mean 'oblate ellipsoid of revolution', and the older term 'oblate spheroid' is hardly used. [ 2 ] [ 3 ] For bodies that cannot be well approximated by an ellipsoid of revolution a triaxial (or scalene) ellipsoid is used. The shape of an ellipsoid of revolution is determined by the shape parameters of its generating ellipse. The semi-major axis of the ellipse, a , becomes the equatorial radius of the ellipsoid; the semi-minor axis of the ellipse, b , becomes the distance from the centre to either pole. These two lengths completely specify the shape of the ellipsoid. In geodesy publications, however, it is common to specify the semi-major axis (equatorial radius) a and the flattening f , defined as f = ( a − b ) / a . That is, f is the amount of flattening at each pole, relative to the radius at the equator. This is often expressed as a fraction 1/ m ; m = 1/ f then being the "inverse flattening". A great many other ellipse parameters are used in geodesy but they can all be related to one or two of the set a , b and f . A great many ellipsoids have been used to model the Earth in the past, with different assumed values of a and b as well as different assumed positions of the center and different axis orientations relative to the solid Earth. Starting in the late twentieth century, improved measurements of satellite orbits and star positions have provided extremely accurate determinations of the Earth's center of mass and of its axis of revolution; and those parameters have been adopted also for all modern reference ellipsoids. The ellipsoid WGS-84 , widely used for mapping and satellite navigation, has f close to 1/300 (more precisely, 1/298.257223563, by definition), corresponding to a difference of the major and minor semi-axes of approximately 21 km (13 miles) (more precisely, 21.3846857548205 km). For comparison, Earth's Moon is even less elliptical, with a flattening of less than 1/825, while Jupiter is visibly oblate at about 1/15 and one of Saturn's triaxial moons, Telesto , is highly flattened, with f between 1/3 and 1/2 (meaning that the polar diameter is between 50% and 67% of the equatorial diameter). Arc measurement is the historical method of determining the ellipsoid. Two meridian arc measurements will allow the derivation of the two parameters required to specify a reference ellipsoid. For example, if the measurements were hypothetically performed exactly over the equator plane and either geographical pole, the radii of curvature so obtained would be related to the equatorial radius and the polar radius, respectively a and b (see: Earth polar and equatorial radius of curvature ). Then, the flattening would readily follow from its definition, f = ( a − b ) / a . For two arc measurements each at arbitrary average latitudes φ i , i = 1, 2, the solution starts from an initial approximation for the equatorial radius a 0 and for the flattening f 0 . The theoretical Earth's meridional radius of curvature M 0 ( φ i ) can be calculated at the latitude of each arc measurement as M 0 ( φ i ) = a 0 (1 − e 0 2 ) / (1 − e 0 2 sin 2 φ i ) 3/2 , where e 0 2 = 2 f 0 − f 0 2 . [ 4 ] Then discrepancies between empirical and theoretical values of the radius of curvature can be formed as δ M i = M i − M 0 ( φ i ).
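A small Python sketch of the parameters just defined, using the WGS-84 constants quoted above; it is illustrative only and not a substitute for a geodesy library.

```python
import math

a = 6378137.0               # WGS-84 semi-major axis in metres (defining constant)
inv_f = 298.257223563       # WGS-84 inverse flattening (defining constant)

f = 1.0 / inv_f             # flattening f = (a - b) / a
b = a * (1.0 - f)           # semi-minor (polar) axis
e2 = 2.0 * f - f * f        # first eccentricity squared, e^2 = 2f - f^2

def meridional_radius(phi_deg):
    """M(phi) = a(1 - e^2) / (1 - e^2 sin^2 phi)^(3/2), the meridional radius of curvature."""
    s = math.sin(math.radians(phi_deg))
    return a * (1.0 - e2) / (1.0 - e2 * s * s) ** 1.5

print(a - b)                                            # ~21384.7 m, the ~21 km difference quoted above
print(meridional_radius(0.0), meridional_radius(90.0))  # smaller at the equator than at the poles
```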
Finally, corrections δ a for the initial equatorial radius and δ f for the flattening can be solved from a system of linear equations obtained by linearizing M with respect to a and f , with the partial derivatives evaluated at the initial values a 0 and f 0 . [ 5 ] Longer arcs with multiple intermediate-latitude determinations can completely determine the ellipsoid that best fits the surveyed region. In practice, multiple arc measurements are used to determine the ellipsoid parameters by the method of least squares adjustment . The parameters determined are usually the semi-major axis, a , and any of the semi-minor axis, b , flattening, or eccentricity. Regional-scale systematic effects observed in the radius of curvature measurements reflect the geoid undulation and the deflection of the vertical , as explored in astrogeodetic leveling . Gravimetry is another technique for determining Earth's flattening, as per Clairaut's theorem . Modern geodesy no longer uses simple meridian arcs or ground triangulation networks, but the methods of satellite geodesy , especially satellite gravimetry . Geodetic coordinates are a type of curvilinear orthogonal coordinate system used in geodesy based on a reference ellipsoid . They include geodetic latitude (north/south) ϕ , longitude (east/west) λ , and ellipsoidal height h (also known as geodetic height [ 6 ] ). In 1687 Isaac Newton published the Principia , in which he included a proof that a rotating self-gravitating fluid body in equilibrium takes the form of a flattened ("oblate") ellipsoid of revolution, generated by an ellipse rotated around its minor diameter; a shape which he termed an oblate spheroid . [ 8 ] [ 9 ] In 1669, Jean Picard found the first accurate and reliable value for the radius of Earth, 6,365.6 kilometres . [ 10 ] [ 11 ] Picard's geodetic observations had been confined to the determination of the magnitude of the Earth considered as a sphere, but the discovery made by Jean Richer turned the attention of mathematicians to the Earth's deviation from a spherical form. [ 12 ] [ 13 ] Christiaan Huygens identified the centrifugal force that explains variations of gravitational acceleration with latitude . [ 14 ] In 1743, Alexis Clairaut proposed a theorem which suggested that the study of variations in gravitational acceleration was a way to determine the figure of the Earth , whose crucial parameter was the flattening of the Earth ellipsoid. [ 14 ] [ 11 ] Towards the end of the 18th century, geodesists sought to reconcile the values of flattening drawn from the measurements of meridian arcs with that given by Clairaut's theorem drawn from the measurement of gravity . [ 11 ] [ 15 ] The Weights and Measures Commission would, in 1799, adopt a flattening of 1/334 based on analysis by Pierre-Simon Laplace , who combined the arc of Peru and the data of the meridian arc of Delambre and Méchain . [ 16 ] : 3 [ 17 ] The reference ellipsoid models listed below have had utility in geodetic work and many are still in use. The older ellipsoids are named for the individual who derived them and the year of development is given. In 1887 the English surveyor Colonel Alexander Ross Clarke CB FRS RE was awarded the Gold Medal of the Royal Society for his work in determining the figure of the Earth.
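Returning to the arc-measurement procedure described above, the following Python sketch shows how corrections δa and δf could be estimated from radius-of-curvature discrepancies by a least-squares adjustment; the "observed" radii are fabricated for the example, and numerical partial derivatives stand in for the analytic expressions cited to [ 5 ].

```python
import numpy as np

def meridional_radius(a, f, phi_deg):
    """Meridional radius of curvature M(phi) for an ellipsoid with semi-major axis a and flattening f."""
    e2 = 2.0 * f - f * f
    s = np.sin(np.radians(phi_deg))
    return a * (1.0 - e2) / (1.0 - e2 * s * s) ** 1.5

# Fabricated "measured" radii of curvature at several average latitudes (here generated from WGS-84).
phis = np.array([10.0, 30.0, 50.0, 70.0])
M_obs = meridional_radius(6378137.0, 1.0 / 298.257223563, phis)

# Initial approximations a0, f0 and the discrepancies delta M_i = M_i - M_0(phi_i).
a0, f0 = 6378000.0, 1.0 / 300.0
dM = M_obs - meridional_radius(a0, f0, phis)

# Design matrix of numerical partial derivatives dM/da and dM/df evaluated at (a0, f0).
da, df = 1.0, 1e-8
A = np.column_stack([
    (meridional_radius(a0 + da, f0, phis) - meridional_radius(a0, f0, phis)) / da,
    (meridional_radius(a0, f0 + df, phis) - meridional_radius(a0, f0, phis)) / df,
])
delta_a, delta_f = np.linalg.lstsq(A, dM, rcond=None)[0]
print(a0 + delta_a, 1.0 / (f0 + delta_f))   # corrected values move toward the WGS-84 parameters
```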
The international ellipsoid was developed by John Fillmore Hayford in 1910 and adopted by the International Union of Geodesy and Geophysics (IUGG) in 1924, which recommended it for international use. At the 1967 meeting of the IUGG held in Lucerne, Switzerland, the ellipsoid called GRS-67 ( Geodetic Reference System 1967) in the listing was recommended for adoption. The new ellipsoid was not recommended to replace the International Ellipsoid (1924), but was advocated for use where a greater degree of accuracy is required. It became a part of the GRS-67 which was approved and adopted at the 1971 meeting of the IUGG held in Moscow. It is used in Australia for the Australian Geodetic Datum and in the South American Datum 1969. The GRS-80 (Geodetic Reference System 1980) as approved and adopted by the IUGG at its Canberra, Australia meeting of 1979 is based on the equatorial radius (semi-major axis of the Earth ellipsoid) a , total mass GM , dynamic form factor J 2 and angular velocity of rotation ω , making the inverse flattening 1/ f a derived quantity. The minute difference in 1/ f seen between GRS-80 and WGS-84 results from an unintentional truncation in the latter's defining constants: while the WGS-84 was designed to adhere closely to the GRS-80, incidentally the WGS-84 derived flattening turned out to differ slightly from the GRS-80 flattening because the normalized second degree zonal harmonic gravitational coefficient, which was derived from the GRS-80 value for J 2 , was truncated to eight significant digits in the normalization process. [ 18 ] An ellipsoidal model describes only the ellipsoid's geometry and a normal gravity field formula to go with it. Commonly an ellipsoidal model is part of a more encompassing geodetic datum . For example, the older ED-50 ( European Datum 1950 ) is based on the Hayford or International Ellipsoid . WGS-84 is peculiar in that the same name is used for both the complete geodetic reference system and its component ellipsoidal model. Nevertheless, the two concepts (ellipsoidal model and geodetic reference system) remain distinct. Note that the same ellipsoid may be known by different names. It is best to mention the defining constants for unambiguous identification.
https://en.wikipedia.org/wiki/Reference_ellipsoid
A reference genome (also known as a reference assembly ) is a digital nucleic acid sequence database, assembled by scientists as a representative example of the set of genes in one idealized individual organism of a species. As they are assembled from the sequencing of DNA from a number of individual donors, reference genomes do not accurately represent the set of genes of any single individual organism. Instead, a reference provides a haploid mosaic of different DNA sequences from each donor. For example, one of the most recent human reference genomes, assembly GRCh38/hg38 , is derived from >60 genomic clone libraries . [ 1 ] There are reference genomes for multiple species of viruses , bacteria , fungi , plants , and animals . Reference genomes are typically used as a guide on which new genomes are built, enabling them to be assembled much more quickly and cheaply than in the initial Human Genome Project . Reference genomes can be accessed online at several locations, using dedicated browsers such as Ensembl or UCSC Genome Browser . [ 2 ] The length of a genome can be measured in multiple different ways. A simple way to measure genome length is to count the number of base pairs in the assembly. [ 3 ] The golden path is an alternative measure of length that omits redundant regions such as haplotypes and pseudoautosomal regions . [ 4 ] [ 5 ] It is usually constructed by layering sequencing information over a physical map to combine scaffold information. It is a 'best estimate' of what the genome will look like and typically includes gaps, making it longer than the typical base pair assembly. [ 6 ] Reference genome assembly requires overlapping reads, which are combined into contigs , contiguous DNA regions of consensus sequence . [ 7 ] If there are gaps between contigs, these can be filled by scaffolding , either by PCR amplification and sequencing or by Bacterial Artificial Chromosome (BAC) cloning. [ 8 ] [ 7 ] Filling these gaps is not always possible; in that case, multiple scaffolds are created in a reference assembly. [ 9 ] Scaffolds are classified into three types: 1) placed, whose chromosome, genomic coordinates and orientation are known; 2) unlocalised, when only the chromosome is known but not the coordinates or orientation; and 3) unplaced, whose chromosome is not known. [ 10 ] The number of contigs and scaffolds , as well as their average lengths, are relevant parameters, among many others, for assessing the quality of a reference genome assembly, since they provide information about the continuity of the final mapping from the original genome. The fewer the scaffolds per chromosome, ideally down to a single scaffold occupying an entire chromosome, the greater the continuity of the genome assembly. [ 11 ] [ 12 ] [ 13 ] Other related parameters are N50 and L50 . N50 is the length such that 50% of the assembly is contained in contigs/scaffolds of that length or greater, while L50 is the smallest number of contigs/scaffolds that together contain half of the assembly. A higher N50 corresponds to a lower L50, and vice versa, indicating higher continuity in the assembly. [ 14 ] [ 15 ] [ 16 ] The human and mouse reference genomes are maintained and improved by the Genome Reference Consortium (GRC), a group of fewer than 20 scientists from a number of genome research institutes, including the European Bioinformatics Institute , the National Center for Biotechnology Information , the Sanger Institute and McDonnell Genome Institute at Washington University in St. Louis .
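A minimal Python sketch of the N50/L50 definitions given above; the contig lengths are invented for the example.

```python
def n50_l50(lengths):
    """Return (N50, L50) for a collection of contig or scaffold lengths."""
    lengths = sorted(lengths, reverse=True)
    half_total = sum(lengths) / 2.0
    running = 0
    for count, length in enumerate(lengths, start=1):
        running += length
        if running >= half_total:
            return length, count   # N50 is this length; L50 is how many of the longest pieces were needed

print(n50_l50([80, 70, 50, 40, 30, 20, 10]))   # (70, 2): the two longest contigs already cover half of 300
```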
GRC continues to improve reference genomes by building new alignments that contain fewer gaps, and fixing misrepresentations in the sequence. The original human reference genome was derived from thirteen anonymous volunteers from Buffalo, New York . Donors were recruited by advertisement in The Buffalo News , on Sunday, March 23, 1997. The first ten male and ten female volunteers were invited to make an appointment with the project's genetic counselors and donate blood from which DNA was extracted. As a result of how the DNA samples were processed, about 80 percent of the reference genome came from eight people, and one male, designated RP11 , accounts for 66 percent of the total. The ABO blood group system differs among humans, but the human reference genome contains only an O allele , although the others are annotated . [ 17 ] [ 18 ] [ 19 ] [ 20 ] [ 21 ] As the cost of DNA sequencing falls, and new full genome sequencing technologies emerge, more genome sequences continue to be generated. In several cases, people such as James D. Watson have had their genomes assembled using massively parallel DNA sequencing . [ 22 ] [ 23 ] Comparison between the reference (assembly NCBI36/hg18) and Watson's genome revealed 3.3 million single nucleotide polymorphism differences, while about 1.4 percent of his DNA could not be matched to the reference genome at all. [ 21 ] [ 22 ] For regions where there is known to be large-scale variation, sets of alternate loci are assembled alongside the reference locus. The latest human reference genome assembly, released by the Genome Reference Consortium , was GRCh38 in 2017. [ 25 ] Several patches have been added to update it, the latest being GRCh38.p14, published on 3 February 2022. [ 26 ] [ 27 ] This build has only 349 gaps across the entire assembly, a marked improvement over the first version, which had roughly 150,000 gaps. [ 18 ] The gaps are mostly in areas such as telomeres , centromeres , and long repetitive sequences , with the biggest gap along the long arm of the Y chromosome, a region of ~30 Mb in length (~52% of the Y chromosome's length). [ 28 ] The number of genomic clone libraries contributing to the reference has increased steadily to >60 over the years, although the individual RP11 still accounts for 70% of the reference genome. [ 1 ] Genomic analysis of this anonymous male suggests that he is of African-European ancestry. [ 1 ] According to the GRC website, their next assembly release for the human genome (version GRCh39) is currently "indefinitely postponed". [ 29 ] In 2022, the Telomere-to-Telomere (T2T) Consortium, [ 30 ] an open, community-based effort, published the first completely assembled reference genome (version T2T-CHM13), without any gaps in the assembly. It did not contain a Y chromosome until version 2.0. [ 31 ] [ 32 ] This assembly allows for the examination of centromeric and pericentromeric sequence evolution. The consortium employed rigorous methods to assemble, clean, and validate complex repeat regions , which are particularly difficult to sequence. [ 33 ] It used ultra-long-read (>100 kb) sequencing to accurately sequence segmental duplications . [ 34 ] The T2T-CHM13 is sequenced from CHM13hTERT, a cell line from an essentially haploid hydatidiform mole . "CHM" stands for "Complete Hydatidiform Mole," and "13" is its line number. "hTERT" stands for "human Telomerase Reverse Transcriptase ".
The cell line has been transfected with the TERT gene, which is responsible for maintaining telomere length and thus contributes to the cell line's immortality . [ 35 ] A hydatidiform mole contains two copies of the same parental genome, and thus is essentially haploid. This eliminates allelic variation and allows better sequencing accuracy. [ 34 ] Recent genome assemblies are as follows: [ 36 ] For much of a genome, the reference provides a good approximation of the DNA of any single individual. But in regions with high allelic diversity , such as the major histocompatibility complex in humans and the major urinary proteins of mice, the reference genome may differ significantly from other individuals. [ 37 ] [ 38 ] [ 39 ] Because the reference genome is a single, distinct sequence, which is what gives it its utility as an index or locator of genomic features, there are limitations in how faithfully it represents the human genome and its variability . Most of the initial samples used for reference genome sequencing came from people of European ancestry. In 2010, comparison of de novo assembled genomes from African and Asian populations with the NCBI reference genome (version NCBI36) showed that these genomes contained ~5 Mb of sequence that did not align against any region of the reference genome. [ 40 ] Follow-up projects to the Human Genome Project seek a deeper and more diverse characterization of human genetic variability, which the reference genome is not able to represent. The HapMap Project , active from 2002 to 2010, had the purpose of creating a map of haplotypes and their most common variations among different human populations. Up to 11 populations of different ancestry were studied, such as individuals of the Han ethnic group from China, Gujaratis from India, the Yoruba people from Nigeria and Japanese people , among others. [ 41 ] [ 42 ] [ 43 ] [ 44 ] The 1000 Genomes Project , carried out between 2008 and 2015, aimed to create a database that includes more than 95% of the variations present in the human genome and whose results can be used in studies of association with diseases ( GWAS ) such as diabetes and cardiovascular or autoimmune diseases. A total of 26 ethnic groups were studied in this project, expanding the scope of the HapMap project to new ethnic groups such as the Mende people of Sierra Leone, the Vietnamese people and the Bengali people . [ 45 ] [ 46 ] [ 47 ] [ 48 ] The Human Pangenome Project , which started its initial phase in 2019 with the creation of the Human Pangenome Reference Consortium, seeks to create the largest map of human genetic variability, taking the results of previous studies as a starting point. [ 49 ] [ 50 ] Recent mouse genome assemblies are as follows: [ 36 ] Since the Human Genome Project was finished, multiple international projects have started, focused on assembling reference genomes for many organisms. Model organisms (e.g., zebrafish ( Danio rerio ), chicken ( Gallus gallus ), Escherichia coli etc.) are of special interest to the scientific community, as well as, for example, endangered species (e.g., the Asian arowana ( Scleropages formosus ) or the American bison ( Bison bison )). As of August 2022, the NCBI database contains 71 886 partially or completely sequenced and assembled genomes from different species, including 676 mammals , 590 birds and 865 fishes . Also noteworthy are 1796 insect genomes, 3747 fungi , 1025 plants , 33 724 bacteria , 26 004 viruses and 2040 archaea .
[ 51 ] Many of these species have annotation data associated with their reference genomes that can be publicly accessed and visualized in genome browsers such as Ensembl and the UCSC Genome Browser . [ 52 ] [ 53 ] Some examples of these international projects are: the Chimpanzee Genome Project , carried out between 2005 and 2013 jointly by the Broad Institute and the McDonnell Genome Institute of Washington University in St. Louis , which generated the first reference genomes for four subspecies of Pan troglodytes ; [ 54 ] [ 55 ] the 100K Pathogen Genome Project , which started in 2012 with the main goal of creating a database of reference genomes for 100 000 pathogenic microorganisms for use in public health, outbreak detection, agriculture and the environment; [ 56 ] and the Earth BioGenome Project , which started in 2018 and aims to sequence and catalog the genomes of all the eukaryotic organisms on Earth to promote biodiversity conservation projects. Within this big-science project there are up to 50 smaller-scale affiliated projects, such as the Africa BioGenome Project or the 1000 Fungal Genomes Project . [ 57 ] [ 58 ] [ 59 ]
https://en.wikipedia.org/wiki/Reference_genome
The reference mark or reference symbol " ※ " is a typographic mark or word used in Chinese , Japanese and Korean (CJK) writing. The symbol was used historically to call attention to an important sentence or idea, such as a prologue or footnote. [ 1 ] As an indicator of a note, the mark serves the same purpose as the asterisk in English. However, in Japanese usage, the note text is placed directly into the main text immediately after the reference mark, rather than at the bottom of the page or end of chapter as is the case in English writing. The Japanese name, komejirushi ( Japanese : こめじるし ; 米印 , pronounced [komedʑiꜜɾɯɕi] , lit. ' rice symbol ' ), refers to the symbol's visual similarity to the kanji for "rice" ( 米 ). [ 2 ] In Korean, the symbol's name, chamgopyo ( Korean : 참고표; 参考表 ), simply means "reference mark". Informally, the symbol is often called danggujangpyo ( 당구장표 ; lit. ' billiard hall mark ' ), as it is often used to indicate the presence of pool halls, due to its visual similarity to two crossed cue sticks and four billiard balls . In Chinese, the symbol is called cānkǎo biāojì ( Chinese : 参考标记 ; lit. 'reference mark') or mǐ xīnghào ( Chinese : 米星号 ; lit. 'rice asterisk' due to its visual similarity to 米 "rice"). It is not often used in Chinese writing. In Unicode , the symbol has code point U+203B ※ REFERENCE MARK .
https://en.wikipedia.org/wiki/Reference_mark
Isotopic reference materials are compounds ( solids , liquids , gases ) with well-defined isotopic compositions and are the ultimate sources of accuracy in mass spectrometric measurements of isotope ratios . Isotopic references are used because mass spectrometers are highly fractionating . As a result, the isotopic ratio that the instrument measures can be very different from the true ratio in the sample. Moreover, the degree of instrument fractionation changes during measurement, often on a timescale shorter than the measurement's duration, and can depend on the characteristics of the sample itself. By measuring a material of known isotopic composition, fractionation within the mass spectrometer can be removed during post-measurement data processing . Without isotope references, measurements by mass spectrometry would be much less accurate and could not be used in comparisons across different analytical facilities. Due to their critical role in measuring isotope ratios, and in part due to historical legacy, isotopic reference materials define the scales on which isotope ratios are reported in the peer-reviewed scientific literature. Isotope reference materials are generated, maintained, and sold by the International Atomic Energy Agency ( IAEA ), the National Institute of Standards and Technology ( NIST ), the United States Geological Survey ( USGS ), the Institute for Reference Materials and Measurements ( IRMM ), and a variety of universities and scientific supply companies. Each of the major stable isotope systems ( hydrogen , carbon , oxygen , nitrogen , and sulfur ) has a wide variety of references encompassing distinct molecular structures. For example, nitrogen isotope reference materials include N-bearing molecules such as ammonia (NH 3 ), atmospheric dinitrogen (N 2 ), and nitrate (NO 3 − ). Isotopic abundances are commonly reported using the δ notation, which is the ratio of two isotopes (R) in a sample relative to the same ratio in a reference material, often reported in per mille (‰) (equation below). Reference materials span a wide range of isotopic compositions, including enrichments (positive δ) and depletions (negative δ). While the δ values of references are widely available, estimates of the absolute isotope ratios (R) in these materials are seldom reported. This article aggregates the δ and R values of common and non-traditional stable isotope reference materials. δ X = ( x/y R sample / x/y R reference ) − 1 The δ values and absolute isotope ratios of common reference materials are summarized in Table 1 and described in more detail below. Alternative values for the absolute isotopic ratios of reference materials, differing only modestly from those in Table 1, are presented in Table 2.5 of Sharp (2007) [ 1 ] (a text freely available online ), as well as Table 1 of the 1993 IAEA report on isotopic reference materials. [ 2 ] For an exhaustive list of reference materials, refer to Appendix I of Sharp (2007), [ 1 ] Table 40.1 of Gröning (2004), [ 3 ] or the website of the International Atomic Energy Agency . Note that the 13 C/ 12 C ratio of Vienna Pee Dee Belemnite (VPDB) and the 34 S/ 32 S ratio of Vienna Canyon Diablo Troilite ( VCDT ) are purely mathematical constructs; neither material existed as a physical sample that could be measured. [ 2 ]
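A minimal Python sketch of the δ calculation defined above; the sample ratios are hypothetical, and the reference ratio is only a VPDB-like illustration (see Table 1 for accepted values).

```python
def delta_permil(r_sample, r_reference):
    """delta = (R_sample / R_reference) - 1, expressed here in per mille."""
    return (r_sample / r_reference - 1.0) * 1000.0

r_ref = 0.011180                           # illustrative 13C/12C reference ratio
print(delta_permil(0.011000, r_ref))       # about -16 permil: depleted, negative delta
print(delta_permil(0.011300, r_ref))       # about +11 permil: enriched, positive delta
```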
In Table 1, "Name" refers to the common name of the reference, "Material" gives its chemical formula and phase , "Type of ratio" is the isotopic ratio reported in "Isotopic ratio", "δ" is the δ value of the material with the indicated reference frame, "Type" is the category of the material using the notation of Gröning (2004) (discussed below), "Citation" gives the article(s) reporting the isotopic abundances on which the isotope ratio is based, and "Notes" are notes. The reported isotopic ratios reflect the results from individual analyses of absolute mass fraction, aggregated in Meija et al. (2016) [ 14 ] and manipulated to reach the given ratios. Error was calculated as the square root of the sum of the squares of fractional reported errors, consistent with standard error propagation, but is not propagated for ratios reached through secondary calculation. The terminology of isotopic reference materials is not applied consistently across subfields of isotope geochemistry or even between individual laboratories . The terminology defined below comes from Gröning et al. (1999) [ 15 ] and Gröning (2004). [ 3 ] Reference materials are the basis for accuracy across many different types of measurement, not only mass spectrometry, and there is a large body of literature concerned with the certification and testing of reference materials . Primary reference materials define the scales on which isotopic ratios are reported. This can mean a material that historically defined an isotopic scale, such as Vienna Standard Mean Ocean Water (VSMOW) for hydrogen isotopes , even if that material is not currently in use. Alternatively, it can mean a material that only ever existed theoretically but is used to define an isotopic scale, such as VCDT for sulfur isotope ratios. Calibration materials are compounds whose isotopic composition is known extremely well relative to the primary reference materials or which define the isotopic composition of the primary reference materials, but are not the isotopic ratios to which data are reported in the scientific literature. For example, the calibration material IAEA-S-1 defines the isotopic scale for sulfur but measurements are reported relative to VCDT , not relative to IAEA-S-1. The calibration material serves the function of the primary reference material when the primary reference is exhausted, unavailable, or never existed in physical form. Reference materials are compounds which are carefully calibrated against the primary reference or a calibration material. These compounds allow for isotopic analysis of materials differing in chemical or isotopic composition from the compounds defining the isotopic scales on which measurements are reported. In general these are the materials most researchers mean when they say "reference materials". An example of a reference material is USGS-34, a KNO 3 salt with a δ 15 N of -1.8‰ vs. AIR . In this case the reference material has a mutually agreed upon value of δ 15 N when measured relative to the primary reference of atmospheric N 2 (Böhlke et al., 2003). [ 16 ] USGS-34 is useful because it allows researchers to directly measure the 15 N/ 14 N of NO 3 − in natural samples against the standard and report observations relative to N 2 without having to first convert the sample to N 2 gas. Primary, calibration, and reference materials are only available in small quantities and purchase is often limited to once every few years.
Depending on the specific isotope systems and instrumentation, a shortage of available reference materials can be problematic for daily instrument calibrations or for researchers attempting to measure isotope ratios in a large number of natural samples. Rather than using primary materials or reference materials, a laboratory measuring stable isotope ratios will typically purchase a small quantity of the relevant reference materials and measure the isotope ratio of an in-house material against the reference , making that material into a working standard specific to that analytical facility. Once this lab-specific working standard has been calibrated to the international scale, the standard is used to measure the isotopic composition of unknown samples. After measurement of both sample and working standard against a third material (commonly called the working gas or the transfer gas), the recorded isotopic distributions are mathematically corrected back to the international scale . It is thus critical to measure the isotopic composition of the working standard with high precision and accuracy (as well as possible given the precision of the instrument and the accuracy of the purchased reference material) because the working standard forms the ultimate basis for the accuracy of most mass spectrometric observations. Unlike reference materials, working standards are typically not calibrated across multiple analytical facilities, and the accepted δ value measured in a given laboratory could reflect bias specific to a single instrument. However, within a single analytical facility this bias can be removed during data reduction. Because each laboratory defines its own working standards, the primary, calibration, and reference materials remain long-lived, while still ensuring that the isotopic composition of unknown samples can be compared across laboratories. The compounds used as isotopic references have a relatively complex history. The broad evolution of reference materials for the hydrogen , carbon , oxygen , and sulfur stable isotope systems is shown in Figure 1. Materials with red text define the primary reference commonly reported in scientific publications and materials with blue text are those available commercially. The hydrogen , carbon , and oxygen isotope scales are defined with two anchoring reference materials. For hydrogen the modern scale is defined by VSMOW2 and SLAP2, and is reported relative to VSMOW . For carbon the scale is defined by either NBS-19 or IAEA-603 depending on the age of the lab, as well as LSVEC, and is reported relative to VPDB. Oxygen isotope ratios can be reported relative to either the VSMOW or VPDB scales. The isotopic scales for sulfur and nitrogen are both defined for only a single anchoring reference material. For sulfur the scale is defined by IAEA-S-1 and is reported relative to VCDT, while for nitrogen the scale is both defined by and reported relative to AIR. The isotopic reference frame of Standard Mean Ocean Water (SMOW) was established by Harmon Craig in 1961 [ 17 ] by measuring δ 2 H and δ 18 O in samples of deep ocean water previously studied by Epstein & Mayeda (1953). [ 18 ] Originally SMOW was a purely theoretical isotope ratio intended to represent the mean state of the deep ocean. In the initial work the isotopic ratios of deep ocean water were measured relative to NBS-1, a standard derived from the steam condensate of Potomac River water. Notably, this means SMOW was originally defined relative to NBS-1, and there was no physical SMOW solution.
Following the advice of an IAEA advisory group meeting in 1966, Ray Weiss and Harmon Craig made an actual solution with the isotopic values of SMOW, which they called Vienna Standard Mean Ocean Water (VSMOW). [ 15 ] They also prepared a second hydrogen isotope reference material from firn collected at the Amundsen-Scott South Pole Station , initially called SNOW and later called Standard Light Antarctic Precipitation (SLAP). [ 2 ] Both VSMOW and SLAP were distributed beginning in 1968. The isotopic characteristics of SLAP and NBS-1 were later evaluated by interlaboratory comparison through measurements against VSMOW (Gonfiantini, 1978). [ 19 ] Subsequently, VSMOW and SLAP were used as the primary isotopic reference materials for the hydrogen isotope system for multiple decades. In 2006 the IAEA Isotope Hydrology Laboratory constructed new isotopic reference materials called VSMOW2 and SLAP2 with nearly identical δ 2 H and δ 18 O as VSMOW and SLAP. Hydrogen isotope working standards are currently calibrated against VSMOW2 and SLAP2 but are still reported on the scale defined by VSMOW and SLAP relative to VSMOW. Additionally, Greenland Ice Sheet Precipitation (GISP) δ 2 H has been measured to high precision in multiple labs, but different analytical facilities disagree on the value. These observations suggest GISP may have been fractionated during aliquoting or storage, implying that the reference material should be used with care. The original carbon isotope reference material was a Belemnite fossil from the PeeDee Formation in South Carolina, known as the Pee Dee Belemnite (PDB). This PDB standard was rapidly consumed and subsequently researchers used replacement standards such as PDB II and PDB III. The carbon isotope reference frame was later established in Vienna against a hypothetical material called the Vienna Pee Dee Belemnite (VPDB). [ 2 ] As with the original SMOW, VPDB never existed as a physical solution or solid. In order to make measurements, researchers use the reference material NBS-19, colloquially known as the Toilet Seat Limestone, [ 20 ] which has an isotopic ratio defined relative to the hypothetical VPDB . The exact origin of NBS-19 is unknown, but it was a white marble slab with a grain size of 200–300 micrometers . To improve the accuracy of carbon isotope measurements, in 2006 the δ 13 C scale was shifted from a one-point calibration against NBS-19 to a two-point calibration. In the new system the VPDB scale is pinned to both the LSVEC Li 2 CO 3 reference material and to the NBS-19 limestone (Coplen et al. , 2006a; Coplen et al., 2006b). [ 21 ] [ 22 ] NBS-19 is now also exhausted and has been replaced with IAEA-603. Oxygen isotopic ratios are commonly compared to both the VSMOW and the VPDB references. Traditionally oxygen in water is reported relative to VSMOW, while oxygen liberated from carbonate rocks or other geologic archives is reported relative to VPDB. As in the case of hydrogen, the oxygen isotopic scale is defined by two materials, VSMOW2 and SLAP2. Measurements of sample δ 18 O vs. VSMOW can be converted to the VPDB reference frame through the following equation: δ 18 O VPDB = 0.97001 × δ 18 O VSMOW − 29.99‰ (Brand et al., 2014). [ 23 ] Nitrogen gas (N 2 ) makes up 78% of the atmosphere and is extremely well mixed over short time-scales, resulting in a homogeneous isotopic distribution ideal for use as a reference material. Atmospheric N 2 is commonly called AIR when being used as an isotopic reference.
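Referring back to the oxygen-scale conversion quoted above (Brand et al., 2014), a one-function Python sketch of the transformation is shown below; the input value is hypothetical.

```python
def d18o_vsmow_to_vpdb(d18o_vsmow):
    """Convert a delta-18-O value (permil) from the VSMOW scale to the VPDB scale."""
    return 0.97001 * d18o_vsmow - 29.99

print(d18o_vsmow_to_vpdb(30.0))   # a carbonate-like +30 permil vs VSMOW is about -0.9 permil vs VPDB
```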
In addition to atmospheric N 2 there are multiple N isotopic reference materials. The original sulfur isotopic reference material was the Canyon Diablo Troilite (CDT), a meteorite recovered from Meteor Crater in Arizona. The Canyon Diablo Meteorite was chosen because it was thought to have a sulfur isotopic composition similar to the bulk Earth . However, the meteorite was later found to be isotopically heterogeneous, with variations up to 0.4‰. [ 13 ] This isotopic variability resulted in problems for the inter-laboratory calibration of sulfur isotope measurements. A meeting of the IAEA in 1993 defined Vienna Canyon Diablo Troilite (VCDT) in an allusion to the earlier establishment of VSMOW. Like the original SMOW and VPDB, VCDT was never a physical material that could be measured but was still used as the definition of the sulfur isotopic scale. For the purposes of actually measuring 34 S/ 32 S ratios, the IAEA defined the δ 34 S of IAEA-S-1 (originally called IAEA-NZ1) to be -0.30‰ relative to VCDT. [ 2 ] These changes to the sulfur isotope reference materials greatly improved inter-laboratory reproducibility. [ 24 ] A recent international project has developed and determined the hydrogen , carbon , and nitrogen isotopic composition of 19 organic isotopic reference materials, now available from USGS , IAEA , and Indiana University . [ 25 ] These reference materials span a large range of δ 2 H (-210.8‰ to +397.0‰), δ 13 C (-40.81‰ to +0.49‰), and δ 15 N (-5.21‰ to +61.53‰), and are amenable to a wide range of analytical techniques . The organic reference materials include caffeine , glycine , n -hexadecane , icosanoic acid methyl ester (C 20 FAME), L-valine , methylheptadecanoate , polyethylene foil, polyethylene powder, vacuum oil, and NBS-22. [ 25 ] The information in Table 7 comes directly from Table 2 of Schimmelmann et al . (2016). [ 25 ] Isotopic reference materials exist for non-traditional isotope systems (elements other than hydrogen , carbon , oxygen , nitrogen , and sulfur ), including lithium , boron , magnesium , calcium , iron , and many others. Because the non-traditional systems were developed relatively recently, the reference materials for these systems are more straightforward and less numerous than for the traditional isotopic systems. The following table contains the material defining the δ=0 for each isotopic scale, the 'best' measurement of the absolute isotopic fractions of an indicated material (which is often the same as the material defining the scale, but not always), the calculated absolute isotopic ratio, and links to lists of isotopic reference materials prepared by the Commission on Isotopic Abundances and Atomic Weights (part of the International Union of Pure and Applied Chemistry (IUPAC) ). A summary list of non-traditional stable isotope systems is available here , and much of this information is derived from Brand et al. (2014). [ 23 ] In addition to the isotope systems listed in Table 8, ongoing research is focused on measuring the isotopic composition of barium (Allmen et al., 2010; [ 26 ] Miyazaki et al., 2014; [ 27 ] Nan et al ., 2015 [ 28 ] ) and vanadium (Nielson et al. , 2011). [ 29 ] Specpure Alfa Aesar is an isotopically well-characterized vanadium solution (Nielson et al. , 2011).
[ 29 ] Furthermore, fractionation during chemical processing can be problematic for certain isotopic analyses, such as measuring heavy isotope ratios following column chromatography. In these cases reference materials can be calibrated for particular chemical procedures. Table 8 gives the material and isotopic ratio defining the δ = 0 scale for each of the indicated elements. In addition, Table 8 lists the material with the 'best' measurement as determined by Meija et al. (2016). "Material" gives the chemical formula , "Type of ratio" is the isotopic ratio reported in "Isotope ratio", and "Citation" gives the article(s) reporting the isotopic abundances on which the isotope ratio is based. The isotopic ratios reflect the results from individual analyses of absolute mass fraction, reported in the cited studies, aggregated in Meija et al. (2016), [ 14 ] and manipulated to reach the reported ratios. Error was calculated as the square root of the sum of the squares of fractional reported errors. Clumped isotopes present a distinct set of challenges for isotopic reference materials. By convention the clumped isotope compositions of CO 2 liberated from CaCO 3 (Δ 47 ) [ 57 ] [ 58 ] [ 59 ] and of CH 4 (Δ 18 , Δ 13 CH 3 D, and Δ 12 CH 2 D 2 ) [ 60 ] [ 61 ] [ 62 ] are reported relative to a stochastic distribution of isotopes. That is, the ratio of a given isotopologue of a molecule with multiple isotopic substitutions against a reference isotopologue is reported normalized to that same abundance ratio where all isotopes are distributed randomly. In practice the chosen reference frame is almost always the isotopologue with no isotopic substitutions. This is 12 C 16 O 2 for carbon dioxide and 12 C 1 H 4 for methane . Standard isotopic reference materials are still required in clumped isotope analysis for measuring the bulk δ values of a sample, which are used to calculate the expected stochastic distribution and subsequently to infer clumped isotope temperatures. However, the clumped isotope composition of most samples is altered in the mass spectrometer during ionization , meaning that post-measurement data correction requires having measured materials of known clumped isotope composition. At a given temperature, equilibrium thermodynamics predicts the distribution of isotopes among possible isotopologues, and these predictions can be calibrated experimentally. [ 63 ] To generate a standard of known clumped isotope composition, current practice is to internally equilibrate analyte gas at high temperatures in the presence of a metal catalyst and assume that it has the Δ value predicted by equilibrium calculations. [ 63 ] Developing isotopic reference materials specifically for clumped isotope analysis remains an ongoing goal of this rapidly developing field and was a major discussion topic during the 6th International Clumped Isotopes Workshop in 2017. It is possible that researchers in the future will measure clumped isotope ratios against internationally distributed reference materials, similar to the current method of measuring the bulk isotope composition of unknown samples. The certification of isotopic reference materials is relatively complex. Like most aspects of reporting isotopic compositions, it reflects a combination of historical artifacts and modern institutions. As a result, the details surrounding the certification of isotopic reference materials vary by element and chemical compound.
As a general guideline, the isotopic compositions of the primary and original calibration reference materials were used to define the isotopic scales and so have no associated uncertainty. Updated calibration materials are generally certified by the IAEA, and important reference materials for two-point isotopic scales (SLAP, LSVEC) were reached through interlaboratory comparison. The isotopic compositions of additional reference materials are established either by individual analytical facilities or through interlaboratory comparisons, but often lack an official IAEA certification. There are certified values for most of the materials listed in Table 1, about half of the materials listed in Tables 2–7, and few of the materials in Table 8. The agreed-upon isotopic compositions of the primary reference and original calibration materials were generally not reached through interlaboratory comparison. In part this is simply because the original materials were used to define the isotopic scales and so have no associated uncertainty. VSMOW serves as the primary reference and calibration material for the hydrogen isotope system and for one of the two possible scales for the oxygen isotope system, and was prepared by Harmon Craig . VSMOW2 is the replacement calibration standard and was calibrated by measurements at five selected laboratories. The isotopic composition of SLAP was reached through interlaboratory comparison. [ 19 ] NBS-19, the original calibration material for the carbon isotope scale, was made by I. Friedman, J. R. O'Neil and G. Cebula [ 64 ] and is used to define the VPDB scale. IAEA-603 is the replacement calibration standard and was calibrated by measurements at three selected laboratories (GEOTOP-UQAM in Montreal , Canada ; USGS in Reston, USA ; MPI -BGC in Jena , Germany ). The isotopic composition of LSVEC was reached through interlaboratory comparison. [ 19 ] IAEA-S-1, the original calibration material for the sulfur isotope scale and still in use today, was prepared by B. W. Robinson. [ 2 ] The IAEA issues official certificates of isotopic composition for most new calibration materials. The IAEA has certified isotopic values for VSMOW2/SLAP2 [ 65 ] and IAEA-603 [ 66 ] (the replacement for the NBS-19 CaCO 3 standard). However, the isotopic compositions of most reference materials distributed by the IAEA are established in the scientific literature. For example, the IAEA distributes the N isotope reference materials USGS34 ( KNO 3 ) and USGS35 ( NaNO 3 ), produced by a group of scientists at the USGS and reported in Böhlke et al. (2003), [ 16 ] but has not certified the isotopic composition of these references. Moreover, the cited δ 15 N and δ 18 O values of these references were not reached through interlaboratory comparison. A second example is IAEA-SO-5, a BaSO 4 reference material produced by R. Krouse and S. Halas and described in Halas & Szaran (2001). [ 67 ] The value of this reference was reached through interlaboratory comparison but lacks IAEA certification. Other reference materials (LSVEC, IAEA-N3) were reached through interlaboratory comparison [ 2 ] and are described by the IAEA, but the status of their certification is unclear. As of 2018 NIST does not provide certificates for the common stable isotope reference materials. As seen at this link [ 68 ] showing the light stable isotope references currently available from NIST , this category includes all of the isotopic references critical for isotopic measurement of hydrogen , carbon , oxygen , nitrogen , and sulfur .
However, for most of these materials NIST does provide a report of investigation, which gives a reference value that is not certified (following the definitions of May et al. (2000)). [ 69 ] For the above examples of USGS34 and USGS35, NIST reports reference values [ 70 ] but has not certified the results of Böhlke et al. (2003). [ 16 ] Conversely, NIST has not provided a reference value for IAEA-SO-5. As seen at this link , [ 71 ] NIST does certify isotopic reference materials for non-traditional "heavy" isotopic systems including rubidium , nickel , strontium , gallium , and thallium , as well as several isotopic systems that would normally be characterized as "light" but are non-traditional, such as magnesium and chlorine . While the isotopic compositions of several of these materials were certified in the mid-1960s, other materials were certified as recently as 2011 (for example, Boric Acid Isotopic Standard 951a ). Because many isotopic reference materials are defined relative to one another using the δ notation, there are few constraints on the absolute isotopic ratios of reference materials. For dual-inlet and continuous-flow mass spectrometry, uncertainty in the raw isotopic ratio is acceptable because samples are often measured through multi-collection and then compared directly with standards, with data in the published literature reported relative to the primary reference materials. In this case the actual measurement is of an isotope ratio and is rapidly converted to a ratio of ratios, so the absolute isotope ratio is only minimally important for attaining high-accuracy measurements. However, the uncertainty in the raw isotopic ratio of reference materials is problematic for applications that do not directly measure mass-resolved ion beams. Measurements of isotope ratios through laser spectroscopy or nuclear magnetic resonance are sensitive to the absolute abundance of isotopes, and uncertainty in the absolute isotopic ratio of a standard can limit measurement accuracy. It is possible that these techniques will ultimately be used to refine the isotope ratios of reference materials. Measuring isotopic ratios by mass spectrometry includes multiple steps in which samples can undergo cross-contamination , including during sample preparation, leakage of gas through instrument valves, the generic category of phenomena called 'memory effects', and the introduction of blanks (foreign analyte measured as part of the sample). [ 1 ] As a result of these instrument-specific effects, the range in measured δ values can be narrower than the true range in the original samples. To correct for such scale compression, researchers calculate a "stretching factor" by measuring two isotopic reference materials (Coplen, 1988). [ 72 ] For the hydrogen system the two reference materials are commonly VSMOW2 and SLAP2, where δ 2 H VSMOW2 = 0 and δ 2 H SLAP2 = -427.5 vs. VSMOW . If the measured difference between the two references is less than 427.5‰, all measured 2 H/ 1 H ratios are multiplied by the stretching factor required to bring the difference between the two reference materials in line with expectations. After this scaling, a factor is added to all measured isotopic ratios so that the reference materials attain their defined isotopic values. [ 1 ] The carbon system also uses two anchoring reference materials (Coplen et al. , 2006a; 2006b). [ 21 ] [ 22 ]
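A simplified Python sketch of the two-point stretching-and-shift correction described above, using the VSMOW/SLAP anchor values for hydrogen; the measured numbers are hypothetical, and real data reduction involves additional corrections.

```python
def normalize_two_point(delta_measured, meas_hi, meas_lo, true_hi=0.0, true_lo=-427.5):
    """Map a measured delta value onto the scale defined by two anchors (e.g. VSMOW2 and SLAP2)."""
    stretch = (true_hi - true_lo) / (meas_hi - meas_lo)      # stretching factor for scale compression
    return true_hi + stretch * (delta_measured - meas_hi)    # scale, then shift onto the defined values

# Hypothetical run in which compression made the anchors read +1.0 and -420.0 permil:
print(normalize_two_point(-100.0, 1.0, -420.0))   # corrected sample value, about -102.6 permil vs VSMOW
```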
https://en.wikipedia.org/wiki/Reference_materials_for_stable_isotope_analysis
A reference model —in systems , enterprise , and software engineering —is an abstract framework or domain-specific ontology consisting of an interlinked set of clearly defined concepts produced by an expert or body of experts to encourage clear communication. A reference model can represent the component parts of any consistent idea, from business functions to system components, as long as it represents a complete set. This frame of reference can then be used to communicate ideas clearly among members of the same community. Reference models are often illustrated as a set of concepts with some indication of the relationships between the concepts. According to OASIS (Organization for the Advancement of Structured Information Standards) a reference model is "an abstract framework for understanding significant relationships among the entities of some environment, and for the development of consistent standards or specifications supporting that environment. A reference model is based on a small number of unifying concepts and may be used as a basis for education and explaining standards to a non-specialist. A reference model is not directly tied to any standards, technologies or other concrete implementation details, but it does seek to provide a common semantics that can be used unambiguously across and between different implementations." There are a number of concepts rolled up into that of a 'reference model.' Each of these concepts is important: There are many uses for a reference model. One use is to create standards for both the objects that inhabit the model and their relationships to one another. By creating standards, the work of engineers and developers who need to create objects that behave according to the standard is made easier. Software can be written that meets a standard. When done well, a standard can make use of design patterns that support key qualities of software, such as the ability to extend the software in an inexpensive way. Another use of a reference model is to educate. Using a reference model, leaders in software development can help break down a large problem space into smaller problems that can be understood, tackled, and refined. Developers who are new to a particular set of problems can quickly learn what the different problems are, and can focus on the problems that they are being asked to solve, while trusting that other areas are well understood and rigorously constructed. The level of trust is important to allow software developers to efficiently focus on their work. A third use of a reference model is to improve communication between people. A reference model breaks up a problem into entities, or "things that exist all by themselves." This is often an explicit recognition of concepts that many people already share, but when created in an explicit manner, a reference model is useful by defining how these concepts differ from, and relate to, one another. This improves communication between individuals involved in using these concepts. A fourth use of a reference model is to create clear roles and responsibilities. By creating a model of entities and their relationships, an organization can dedicate specific individuals or teams, making them responsible for solving a problem that concerns a specific set of entities. For example, if a reference model describes a set of business measurements needed to create a balanced scorecard , then each measurement can be assigned to a specific business leader. 
That allows a senior manager to hold each of their team members responsible for producing high-quality results. A fifth use of a reference model is to allow the comparison of different things. By breaking up a problem space into basic concepts, a reference model can be used to examine two different solutions to that problem. In doing so, the component parts of a solution can be discussed in relation to one another. For example, if a reference model describes computer systems that help track contacts between a business and its customers, then the model can be used by a business to decide which of five different software products to purchase, based on its needs. A reference model, in this example, could be used to compare how well each of the candidate solutions can be configured to meet the needs of a particular business process. A number of established reference models are in use across different domains.
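To make the idea of "concepts plus relationships" concrete, the following is a minimal sketch in Python of how a toy reference model might be represented and then used to score candidate solutions, in the spirit of the comparison use described above. The entity names, the ReferenceModel class, and the coverage measure are illustrative assumptions, not part of any published reference model or standard.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Concept:
    name: str
    definition: str

@dataclass
class ReferenceModel:
    concepts: dict = field(default_factory=dict)   # concept name -> Concept
    relations: set = field(default_factory=set)    # (subject, verb, object) triples

    def add_concept(self, name, definition):
        self.concepts[name] = Concept(name, definition)

    def relate(self, subject, verb, obj):
        # Relationships may only link concepts that the model itself defines.
        if subject not in self.concepts or obj not in self.concepts:
            raise ValueError("relationships must connect defined concepts")
        self.relations.add((subject, verb, obj))

    def coverage(self, solution_features):
        # Fraction of the model's concepts addressed by a candidate solution,
        # usable for the kind of product comparison described above.
        covered = self.concepts.keys() & set(solution_features)
        return len(covered) / len(self.concepts)

# A toy contact-tracking reference model (hypothetical names).
crm = ReferenceModel()
crm.add_concept("Customer", "a party that purchases goods or services")
crm.add_concept("Contact", "a recorded interaction with a customer")
crm.add_concept("Channel", "the medium through which a contact occurs")
crm.relate("Contact", "involves", "Customer")
crm.relate("Contact", "occurs via", "Channel")

print(crm.coverage({"Customer", "Contact"}))             # ~0.67: lacks channel tracking
print(crm.coverage({"Customer", "Contact", "Channel"}))  # 1.0: covers every concept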
https://en.wikipedia.org/wiki/Reference_model
A reference tone is a pure tone corresponding to a known frequency, produced at a stable sound pressure level (volume), usually by specialized equipment. The most common reference tone in audio engineering is a 1000 Hz tone at −20 dB. Audio engineers use it to adjust playback equipment so that the accompanying media plays at a comfortable volume for the audience. In video production, this tone is usually accompanied by a test card so the video programming may be calibrated as well. It is sometimes played in sequence with 100 Hz and 10 kHz tones to ensure an accurate response from the equipment at varying audio frequencies. This is also the "bleep" tone commonly used to censor obscene or sensitive audio content. Many electronic tuners used by musicians emit a tone of 440 Hz, corresponding to a pitch of A above middle C (A4). More sophisticated tuners offer a choice of other reference pitches to account for differences in tuning. Some specialized tuners offer pitches commonly used on a particular instrument (standard guitar tuning, fifth intervals for string instruments, the open tones for various brass instruments). In telecommunication, a standard test tone is a pure tone with a standardized level generally used for level alignment of single links and of links in tandem. [ 1 ] For standardized test signal levels and frequencies, see MIL-STD-188-100 for United States Department of Defense (DOD) use, and the Code of Federal Regulations Title 47, part 68 for other government agencies.
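As an illustration of how such a tone can be produced digitally, here is a minimal Python sketch that writes a 1 kHz sine tone to a mono 16-bit WAV file. It reads the −20 dB figure as −20 dBFS (relative to digital full scale), and the 48 kHz sample rate, two-second duration, and file name are arbitrary choices for the example rather than values taken from any standard.

import math
import struct
import wave

SAMPLE_RATE = 48000      # samples per second (assumed)
DURATION_S = 2.0         # length of the tone in seconds (assumed)
FREQ_HZ = 1000.0         # the common 1 kHz reference frequency
LEVEL_DBFS = -20.0       # level relative to digital full scale (assumed interpretation)

amplitude = 10 ** (LEVEL_DBFS / 20.0)     # -20 dBFS corresponds to 0.1 of full scale
peak = int(amplitude * 32767)             # scale to the 16-bit signed sample range

frames = bytearray()
for n in range(int(SAMPLE_RATE * DURATION_S)):
    sample = peak * math.sin(2.0 * math.pi * FREQ_HZ * n / SAMPLE_RATE)
    frames += struct.pack('<h', int(sample))   # little-endian 16-bit signed sample

with wave.open('reference_tone.wav', 'wb') as wav:
    wav.setnchannels(1)             # mono
    wav.setsampwidth(2)             # 16-bit samples
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(bytes(frames))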
https://en.wikipedia.org/wiki/Reference_tone
The Refined Bitumen Association is the trade association for UK bitumen companies. It was formed in 1968. In 2000, it formed the Asphalt Industry Alliance with the Mineral Products Association, based in London. Asphalt is a mixture of bitumen and quarried mineral products, represented by both trade organisations. Its five main members cover 95% of the UK market. It represents the UK bitumen industry at a national level. [ 2 ] The UK produces around 1.5 million tonnes of bitumen a year, and 90% of UK bitumen is used on roads.
https://en.wikipedia.org/wiki/Refined_Bitumen_Association
In category theory and related fields of mathematics, a refinement is a construction that generalizes the operations of "interior enrichment", like bornologification or saturation of a locally convex space. A dual construction is called envelope . Suppose K {\displaystyle K} is a category, X {\displaystyle X} an object in K {\displaystyle K} , and Γ {\displaystyle \Gamma } and Φ {\displaystyle \Phi } two classes of morphisms in K {\displaystyle K} . The definition [ 1 ] of a refinement of X {\displaystyle X} in the class Γ {\displaystyle \Gamma } by means of the class Φ {\displaystyle \Phi } consists of two steps. In the special case when Γ {\displaystyle \Gamma } is the class of all morphisms whose ranges belong to a given class of objects L {\displaystyle L} in K {\displaystyle K} , it is convenient to replace Γ {\displaystyle \Gamma } with L {\displaystyle L} in the notations (and in the terms). Similarly, if Φ {\displaystyle \Phi } is the class of all morphisms whose ranges belong to a given class of objects M {\displaystyle M} in K {\displaystyle K} , it is convenient to replace Φ {\displaystyle \Phi } with M {\displaystyle M} in the notations (and in the terms). For example, one can speak about a refinement of X {\displaystyle X} in the class of objects L {\displaystyle L} by means of the class of objects M {\displaystyle M} .
https://en.wikipedia.org/wiki/Refinement_(category_theory)
Refinement is a generic term in computer science that encompasses various approaches for producing correct computer programs and simplifying existing programs to enable their formal verification. In formal methods , program refinement is the verifiable transformation of an abstract (high-level) formal specification into a concrete (low-level) executable program . [ citation needed ] Stepwise refinement allows this process to be done in stages. Logically, refinement normally involves implication , but there can be additional complications. The progressive just-in-time preparation of the product backlog (requirements list) in agile software development approaches, such as Scrum , is also commonly described as refinement. [ 1 ] Data refinement is used to convert an abstract data model (in terms of sets, for example) into implementable data structures (such as arrays ). [ citation needed ] Operation refinement converts a specification of an operation on a system into an implementable program (e.g., a procedure ). The postcondition can be strengthened and/or the precondition weakened in this process. This reduces any nondeterminism in the specification, typically to a completely deterministic implementation. For example, x′ ∈ {1,2,3} (where x′ denotes the value of the variable x after the operation) could be refined to x′ ∈ {1,2}, then x′ ∈ {1}, and implemented as x := 1. Implementations of x := 2 and x := 3 would be equally acceptable in this case, reached by a different route for the refinement. However, we must be careful not to refine to x′ ∈ {} (equivalent to false ) since this is unimplementable; it is impossible to select a member from the empty set . The term reification is also sometimes used (coined by Cliff Jones ). Retrenchment is an alternative technique when formal refinement is not possible. The opposite of refinement is abstraction . Refinement calculus is a formal system (inspired by Hoare logic ) that promotes program refinement. The FermaT Transformation System is an industrial-strength implementation of refinement. The B-Method is also a formal method that extends refinement calculus with a component language; it has been used in industrial developments. In type theory , a refinement type [ 2 ] [ 3 ] [ 4 ] is a type endowed with a predicate which is assumed to hold for any element of the refined type. Refinement types can express preconditions when used as function arguments or postconditions when used as return types : for instance, the type of a function which accepts natural numbers and returns natural numbers greater than 5 may be written as f : N → { n : N | n > 5 } {\displaystyle f:\mathbb {N} \rightarrow \{n:\mathbb {N} \,|\,n>5\}} . Refinement types are thus related to behavioral subtyping . This software-engineering -related article is a stub . You can help Wikipedia by expanding it .
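To make the operation-refinement example above concrete, here is a minimal Python sketch (an illustrative model, not any particular refinement calculus or tool). A specification is modelled as the set of post-states it allows; an implementation refines it when every outcome it can produce is permitted by the specification and it has at least one outcome, which is exactly why refining to the empty set is ruled out.

def abstract_spec():
    # Abstract specification: the operation may set x to 1, 2 or 3 (nondeterministic).
    return {1, 2, 3}

def refined_spec():
    # One refinement step: nondeterminism reduced to {1, 2}.
    return {1, 2}

def implementation():
    # Fully deterministic implementation: x := 1.
    return 1

def refines(impl_outcomes, spec_outcomes):
    # impl refines spec iff every behaviour of impl is allowed by spec,
    # and impl is actually implementable (it must have at least one outcome).
    return bool(impl_outcomes) and impl_outcomes <= spec_outcomes

print(refines(refined_spec(), abstract_spec()))        # True: {1, 2} refines {1, 2, 3}
print(refines({implementation()}, refined_spec()))     # True: x := 1 refines x' in {1, 2}
print(refines(set(), abstract_spec()))                 # False: x' in {} is unimplementable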
https://en.wikipedia.org/wiki/Refinement_(computing)
In mathematics , and more particularly in set theory , a cover (or covering ) [ 1 ] of a set X {\displaystyle X} is a family of subsets of X {\displaystyle X} whose union is all of X {\displaystyle X} . More formally, if C = { U α : α ∈ A } {\displaystyle C=\lbrace U_{\alpha }:\alpha \in A\rbrace } is an indexed family of subsets U α ⊂ X {\displaystyle U_{\alpha }\subset X} (indexed by the set A {\displaystyle A} ), then C {\displaystyle C} is a cover of X {\displaystyle X} if ⋃ α ∈ A U α = X . {\displaystyle \bigcup _{\alpha \in A}U_{\alpha }=X.} Thus the collection { U α : α ∈ A } {\displaystyle \lbrace U_{\alpha }:\alpha \in A\rbrace } is a cover of X {\displaystyle X} if each element of X {\displaystyle X} belongs to at least one of the subsets U α {\displaystyle U_{\alpha }} . Covers are commonly used in the context of topology . If the set X {\displaystyle X} is a topological space , then a cover C {\displaystyle C} of X {\displaystyle X} is a collection of subsets { U α } α ∈ A {\displaystyle \{U_{\alpha }\}_{\alpha \in A}} of X {\displaystyle X} whose union is the whole space X = ⋃ α ∈ A U α {\displaystyle X=\bigcup _{\alpha \in A}U_{\alpha }} . In this case C {\displaystyle C} is said to cover X {\displaystyle X} , or that the sets U α {\displaystyle U_{\alpha }} cover X {\displaystyle X} . [ 1 ] If Y {\displaystyle Y} is a (topological) subspace of X {\displaystyle X} , then a cover of Y {\displaystyle Y} is a collection of subsets C = { U α } α ∈ A {\displaystyle C=\{U_{\alpha }\}_{\alpha \in A}} of X {\displaystyle X} whose union contains Y {\displaystyle Y} . That is, C {\displaystyle C} is a cover of Y {\displaystyle Y} if Y ⊆ ⋃ α ∈ A U α . {\displaystyle Y\subseteq \bigcup _{\alpha \in A}U_{\alpha }.} Here, Y {\displaystyle Y} may be covered with either sets in Y {\displaystyle Y} itself or sets in the parent space X {\displaystyle X} . A cover of X {\displaystyle X} is said to be locally finite if every point of X {\displaystyle X} has a neighborhood that intersects only finitely many sets in the cover. Formally, C = { U α } {\displaystyle C=\{U_{\alpha }\}} is locally finite if, for any x ∈ X {\displaystyle x\in X} , there exists some neighborhood N ( x ) {\displaystyle N(x)} of x {\displaystyle x} such that the set { α ∈ A : U α ∩ N ( x ) ≠ ∅ } {\displaystyle \left\{\alpha \in A:U_{\alpha }\cap N(x)\neq \varnothing \right\}} is finite. A cover of X {\displaystyle X} is said to be point finite if every point of X {\displaystyle X} is contained in only finitely many sets in the cover. [ 1 ] A cover is point finite if it is locally finite, though the converse is not necessarily true. Let C {\displaystyle C} be a cover of a topological space X {\displaystyle X} . A subcover of C {\displaystyle C} is a subset of C {\displaystyle C} that still covers X {\displaystyle X} . The cover C {\displaystyle C} is said to be an open cover if each of its members is an open set . That is, each U α {\displaystyle U_{\alpha }} is contained in T {\displaystyle T} , where T {\displaystyle T} is the topology on X . [ 1 ] A simple way to get a subcover is to omit the sets contained in another set in the cover. Consider specifically open covers. Let B {\displaystyle {\mathcal {B}}} be a topological basis of X {\displaystyle X} and O {\displaystyle {\mathcal {O}}} be an open cover of X {\displaystyle X} . First, take A = { A ∈ B : there exists U ∈ O such that A ⊆ U } {\displaystyle {\mathcal {A}}=\{A\in {\mathcal {B}}:{\text{ there exists }}U\in {\mathcal {O}}{\text{ such that }}A\subseteq U\}} . 
Then A {\displaystyle {\mathcal {A}}} is a refinement of O {\displaystyle {\mathcal {O}}} . Next, for each A ∈ A , {\displaystyle A\in {\mathcal {A}},} one may select a U A ∈ O {\displaystyle U_{A}\in {\mathcal {O}}} containing A {\displaystyle A} (requiring the axiom of choice). Then C = { U A ∈ O : A ∈ A } {\displaystyle {\mathcal {C}}=\{U_{A}\in {\mathcal {O}}:A\in {\mathcal {A}}\}} is a subcover of O . {\displaystyle {\mathcal {O}}.} Hence the cardinality of a subcover of an open cover can be as small as that of any topological basis. In particular, second countability implies that the space is Lindelöf . A refinement of a cover C {\displaystyle C} of a topological space X {\displaystyle X} is a new cover D {\displaystyle D} of X {\displaystyle X} such that every set in D {\displaystyle D} is contained in some set in C {\displaystyle C} . Formally, writing C = { U α : α ∈ A } and D = { V β : β ∈ B }, the cover D is a refinement of C if for every β ∈ B there exists α ∈ A such that V β ⊆ U α . In other words, there is a refinement map ϕ : B → A {\displaystyle \phi :B\to A} satisfying V β ⊆ U ϕ ( β ) {\displaystyle V_{\beta }\subseteq U_{\phi (\beta )}} for every β ∈ B . {\displaystyle \beta \in B.} This map is used, for instance, in the Čech cohomology of X {\displaystyle X} . [ 2 ] Every subcover is also a refinement, but the converse is not always true. A subcover is made from the sets that are in the cover, but omitting some of them; whereas a refinement is made from any sets that are subsets of the sets in the cover. The refinement relation on the set of covers of X {\displaystyle X} is transitive and reflexive , i.e. a preorder . It is never asymmetric for X ≠ ∅ {\displaystyle X\neq \emptyset } . Generally speaking, a refinement of a given structure is another that in some sense contains it. Examples are to be found when partitioning an interval (one refinement of a 0 < a 1 < ⋯ < a n {\displaystyle a_{0}<a_{1}<\cdots <a_{n}} being a 0 < b 0 < a 1 < a 2 < ⋯ < a n − 1 < b 1 < a n {\displaystyle a_{0}<b_{0}<a_{1}<a_{2}<\cdots <a_{n-1}<b_{1}<a_{n}} ), and when considering topologies (the standard topology in Euclidean space being a refinement of the trivial topology ). When subdividing simplicial complexes (the first barycentric subdivision of a simplicial complex is a refinement), the situation is slightly different: every simplex in the finer complex is a face of some simplex in the coarser one, and both have equal underlying polyhedra. Yet another notion of refinement is that of star refinement . The language of covers is often used to define several topological properties related to compactness. A topological space X {\displaystyle X} is said to be: compact if every open cover has a finite subcover (or equivalently, a finite open refinement); Lindelöf if every open cover has a countable subcover (or equivalently, a countable open refinement); metacompact if every open cover has a point-finite open refinement; and paracompact if every open cover admits a locally finite open refinement. For some more variations see the above articles. A topological space X is said to be of covering dimension n if every open cover of X has a point-finite open refinement such that no point of X is included in more than n + 1 sets in the refinement and if n is the minimum value for which this is true. [ 3 ] If no such minimal n exists, the space is said to be of infinite covering dimension.
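As a concrete illustration of the distinction between subcovers and refinements (an example added here for clarity, not taken from the original article), consider the real line with its standard topology:

\[
\mathcal{C} = \{(-\infty,\,1),\ (0,\,\infty)\}, \qquad
\mathcal{D} = \{(n,\,n+2) : n \in \mathbb{Z}\}.
\]

Both are open covers of \(\mathbb{R}\). Every member of \(\mathcal{D}\) is contained in a member of \(\mathcal{C}\): the interval \((n,\,n+2)\) lies in \((-\infty,\,1)\) when \(n \le -1\) and in \((0,\,\infty)\) when \(n \ge 0\), so \(\mathcal{D}\) is a refinement of \(\mathcal{C}\), with refinement map \(\phi(n) = (-\infty,\,1)\) for \(n \le -1\) and \(\phi(n) = (0,\,\infty)\) for \(n \ge 0\). However, \(\mathcal{D}\) is not a subcover of \(\mathcal{C}\), since none of its members is itself an element of \(\mathcal{C}\).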
https://en.wikipedia.org/wiki/Refinement_(topology)
A refinery is a production facility composed of a group of chemical engineering unit processes and unit operations refining certain materials or converting raw material into products of value. Different types of refineries include oil refineries, natural gas processing plants, sugar refineries, and metal refineries. The image below is a schematic flow diagram of a typical oil refinery depicting various unit processes and the flow of intermediate products between the inlet crude oil feedstock and the final products. The diagram depicts only one of the hundreds of different configurations. It does not include any of the usual facilities providing utilities such as steam, cooling water, and electric power, nor storage tanks for crude oil feedstock and for intermediate products and end products. [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] The image below is a schematic block flow diagram of a typical natural gas processing plant. It shows the various unit processes that convert raw natural gas into gas pipelined to end users. The block flow diagram also shows how processing of the raw natural gas yields byproduct sulfur, byproduct ethane, and natural gas liquids (NGL) propane, butanes and natural gasoline (denoted as pentanes +). [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] Sugar is generally produced from sugarcane or sugar beets . As the global production of sugar from sugarcane is at least twice the production from sugar beets, this section focuses on sugarcane. [ 12 ] Sugarcane is traditionally refined into sugar in two stages. In the first stage, raw sugar is produced by the milling of harvested sugarcane. In a sugar mill, sugarcane is washed, chopped, and shredded by revolving knives. The shredded cane is mixed with water and crushed. The juices (containing 10-15 percent sucrose ) are collected and mixed with lime to adjust the pH to 7, prevent the decay of sucrose into glucose and fructose , and precipitate impurities. The lime and other suspended solids are settled out, and the clarified juice is concentrated in a multiple-effect evaporator to make a syrup with about 60 weight percent sucrose. The syrup is further concentrated under vacuum until it becomes supersaturated and is then seeded with crystalline sugar. Upon cooling, sugar crystallizes out of the syrup. Centrifuging then separates the sugar from the remaining liquid (molasses). Raw sugar has a yellow to brown color. Sugar is sometimes consumed locally at this stage but usually undergoes further purification. [ 13 ] Sulfur dioxide is bubbled through the cane juice prior to crystallization in a process known as "sulfitation". This process inhibits color-forming reactions and stabilizes the sugar juices to produce "mill white" or "plantation white" sugar. The fibrous solids, called bagasse , remaining after the crushing of the shredded sugarcane are burned for fuel, which helps a sugar mill to become self-sufficient in energy. Any excess bagasse can be used for animal feed, to produce paper, or burned to generate electricity for the local power grid. The second stage is often executed in heavy sugar-consuming regions such as North America , Europe , and Japan . In the second stage, white sugar is produced that is more than 99 percent pure sucrose . In such refineries, raw sugar is further purified by fractional crystallization .
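A short worked mass balance illustrates the evaporation step described above. The 12 percent sucrose figure and the 100 kg basis are illustrative assumptions chosen from within the 10-15 percent range quoted; only the 60 percent target comes from the text. For a feed of clarified juice \(F\) at sucrose mass fraction \(x_F\) concentrated to syrup \(S\) at fraction \(x_S\), a sucrose balance gives

\[
F\,x_F = S\,x_S
\quad\Longrightarrow\quad
S = F\,\frac{x_F}{x_S} = 100\ \text{kg} \times \frac{0.12}{0.60} = 20\ \text{kg},
\]

so the multiple-effect evaporator must remove \(W = F - S = 80\ \text{kg}\) of water for every 100 kg of clarified juice.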
https://en.wikipedia.org/wiki/Refinery
In metallurgy , refining consists of purifying an impure metal. It is to be distinguished from other processes such as smelting and calcining in that those two involve a chemical change to the raw material, whereas in refining the final material is chemically identical to the raw material. Refining thus increases the purity of the raw material via processing. [ clarification needed ] There are many processes, including pyrometallurgical and hydrometallurgical techniques. One ancient process for extracting silver from lead was cupellation . This process involved melting impure lead samples in a cupel, a small porous container designed for purification that would aid in the oxidation process while being able to withstand the heat needed to melt these metals in a furnace. This reaction would oxidize the lead to litharge , along with any other impurities present, whereas the silver would not get oxidized. [ 1 ] In the 18th century, the process was carried on using a kind of reverberatory furnace , but differing from the usual kind in that air was blown over the surface of the molten lead from bellows or (in the 19th century) blowing cylinders. [ 2 ] The Pattinson process was patented in 1833 by its inventor, Hugh Lee Pattinson , who described it as "an improved method for separating silver from lead" [ citation needed ] . It exploited the fact that in molten lead (containing traces of silver), the first metal to solidify out of the liquid is lead, leaving the remaining liquid richer in silver. Pattinson's equipment consisted of a row of up to 13 iron pots, each heated from below. Some lead, naturally containing a small percentage of silver, was loaded into the central pot and melted. This was then allowed to cool. As the lead solidified, it was removed using large perforated iron ladles and moved to the next pot in one direction, while the remaining metal, now richer in silver, was transferred to the next pot in the opposite direction. The process was repeated from one pot to the next, the lead accumulating in the pot at one end and metal enriched in silver in the pot at the other. [ 3 ] [ 4 ] The level of enrichment possible is limited by the lead-silver eutectic, and typically the process stopped around 600 to 700 ounces per ton (approx. 2%), so further separation was carried out by cupellation. [ 5 ] The process was economic for lead containing at least 250 grams of silver per ton. [ 2 ] The Parkes process , patented in 1850 by Alexander Parkes , uses molten zinc . Zinc is not miscible with lead, and when the two molten metals are mixed, the zinc separates and floats to the top with ~2% lead. However, silver dissolves more readily in zinc, so the upper layer of zinc carries a significant portion of the silver. The melt is then cooled until the zinc solidifies, and the dross is skimmed off. The silver is then recovered by volatilizing the zinc. [ 2 ] The Parkes process largely replaced the Pattinson process, except where the lead contained insufficient silver. In such a case, the Pattinson process provided a method to enrich it in silver to about 40 to 60 ounces per ton, at which concentration it could be treated using the Parkes process. [ 6 ] The initial product of copper smelting was impure "blister" copper , which contained sulfur and oxygen. To remove these impurities, the blister copper was repeatedly melted and solidified, undergoing a cycle of oxidation and reduction. [ 7 ] In one of the previous melting stages, lead was added. 
Gold and silver preferentially dissolved in this, thus providing a means of recovering these precious metals. To produce purer copper suitable for making copper plates or hollow-ware , further melting processes were undertaken, using charcoal as fuel. The repeated application of such fire-refining processes was capable of producing copper that was 98.5-99.5% pure. [ citation needed ] The purest copper is obtained by an electrolytic process, undertaken using a slab of impure copper as the anode and a thin sheet of pure copper as the cathode . The electrolyte is an acidic solution of copper(II) sulfate. When electricity is passed through the cell, copper is dissolved from the anode and deposited on the cathode. However, impurities either remain in solution or collect as an insoluble sludge. This process only became possible following the invention of the dynamo ; it was first used in South Wales in 1869. [ citation needed ] The product of the blast furnace is pig iron , which contains 4-5% carbon and usually some silicon . To produce a forgeable product, a further process was needed (usually described as fining , rather than refining). From the 16th century, this was undertaken in a finery forge . At the end of the 18th century, this began to be replaced by puddling (in a puddling furnace ), which was in turn gradually superseded by the production of mild steel by the Bessemer process . [ 8 ] The term refining is used in a narrower context. Henry Cort 's original puddling process only worked where the raw material was white cast iron , rather than the grey pig iron that was the usual raw material for finery forges. To use grey pig iron , a preliminary refining process was necessary to remove silicon. The pig iron was melted in a running out furnace and then run out into a trough. This process oxidized the silicon to form a slag, which floated on the iron and was removed by lowering a dam at the end of the trough. The product of this process was a white metal, known as finers metal or refined iron . Precious metal refining is the separation of precious metals from noble-metalliferous materials. Examples of these materials include used catalysts , electronic assemblies , ores , or metal alloys . In order to isolate noble-metalliferous materials, pyrolysis and/or hydrolysis procedures are used. In pyrolysis, the noble-metalliferous products are released from the other materials by solidifying in a melt to become cinder and are then poured off or oxidized . In hydrolysis, the noble-metalliferous products are dissolved either in aqua regia (consisting of hydrochloric acid and nitric acid ) or in a solution of hydrochloric acid and chlorine gas. Subsequently, certain metals can be precipitated or reduced directly with a salt, gas, organic, and/or nitro hydrate compound. Afterwards, they go through cleaning stages or are recrystallized . The precious metals are separated from the metal salt by calcination . The noble-metalliferous materials are hydrolyzed first and thermally prepared ( pyrolyzed ) thereafter. The processes give better yields when catalysts are used, and these catalysts may themselves contain precious metals. When catalysts are used, the recycled product is removed in each case and passed through the cycle several times. [ citation needed ]
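The rate of the electrolytic refining step described above can be estimated with Faraday's law of electrolysis. The 200 A cell current and the assumption of 100 percent current efficiency are illustrative figures for this example, not values from the article; the molar mass of copper and the Faraday constant are standard data.

\[
m = \frac{I\,t\,M}{z\,F}
  = \frac{200\ \text{A} \times 86\,400\ \text{s} \times 63.5\ \text{g·mol}^{-1}}
         {2 \times 96\,485\ \text{C·mol}^{-1}}
  \approx 5.7\ \text{kg},
\]

so a single cell carrying 200 A deposits roughly 5.7 kg of copper on the cathode per day, with the impurities left behind in the electrolyte or in the anode sludge.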
https://en.wikipedia.org/wiki/Refining_(metallurgy)