In mathematics, a real structure on a complex vector space is a way to decompose the complex vector space into the direct sum of two real vector spaces. The prototype of such a structure is the field of complex numbers itself, considered as a complex vector space over itself and equipped with the conjugation map $\sigma : \mathbb{C} \to \mathbb{C}$, $\sigma(z) = \bar{z}$, which gives the "canonical" real structure on $\mathbb{C}$, namely $\mathbb{C} = \mathbb{R} \oplus i\mathbb{R}$.
The conjugation map is antilinear: $\sigma(\lambda z) = \bar{\lambda}\,\sigma(z)$ and $\sigma(z_1 + z_2) = \sigma(z_1) + \sigma(z_2)$.
A real structure on a complex vector space V is an antilinear involution $\sigma : V \to V$. A real structure defines a real subspace $V_{\mathbb{R}} \subset V$, its fixed locus, and the natural map $V_{\mathbb{R}} \otimes_{\mathbb{R}} \mathbb{C} \to V$ is an isomorphism. Conversely, any vector space that is the complexification of a real vector space has a natural real structure.
One first notes that every complex space V has a realification obtained by taking the same vectors as in the original set and restricting the scalars to be real. If $t \in V$ and $t \neq 0$, then the vectors $t$ and $it$ are linearly independent in the realification of V. Hence: $\dim_{\mathbb{R}} V = 2 \dim_{\mathbb{C}} V.$
Naturally, one would wish to represent V as the direct sum of two real vector spaces, the "real and imaginary parts of V". There is no canonical way of doing this: such a splitting is an additional real structure on V. It may be introduced as follows. [ 1 ] Let $\sigma : V \to V$ be an antilinear map such that $\sigma \circ \sigma = \mathrm{id}_V$, that is, an antilinear involution of the complex space V.
Any vector $v \in V$ can be written $v = v^{+} + v^{-}$, where $v^{+} = \tfrac{1}{2}(v + \sigma v)$ and $v^{-} = \tfrac{1}{2}(v - \sigma v)$.
Therefore, one gets a direct sum of vector spaces $V = V^{+} \oplus V^{-}$, where $V^{+} = \{v \in V : \sigma v = v\}$ and $V^{-} = \{v \in V : \sigma v = -v\}$.
Both sets $V^{+}$ and $V^{-}$ are real vector spaces. The real-linear map $K : V^{+} \to V^{-}$, where $K(t) = it$, is an isomorphism of real vector spaces, whence $\dim_{\mathbb{R}} V^{+} = \dim_{\mathbb{R}} V^{-} = \dim_{\mathbb{C}} V$.
The first factor $V^{+}$ is also denoted by $V_{\mathbb{R}}$ and is left invariant by $\sigma$, that is $\sigma(V_{\mathbb{R}}) \subset V_{\mathbb{R}}$. The second factor $V^{-}$ is usually denoted by $iV_{\mathbb{R}}$. The direct sum $V = V^{+} \oplus V^{-}$ now reads: $V = V_{\mathbb{R}} \oplus iV_{\mathbb{R}},$
i.e. as the direct sum of the "real" $V_{\mathbb{R}}$ and "imaginary" $iV_{\mathbb{R}}$ parts of V. This construction strongly depends on the choice of an antilinear involution of the complex vector space V. The complexification of the real vector space $V_{\mathbb{R}}$, i.e. $V^{\mathbb{C}} = V_{\mathbb{R}} \otimes_{\mathbb{R}} \mathbb{C}$, admits a natural real structure and hence is canonically isomorphic to the direct sum of two copies of $V_{\mathbb{R}}$: $V^{\mathbb{C}} = V_{\mathbb{R}} \oplus iV_{\mathbb{R}}.$
This yields a natural linear isomorphism $V_{\mathbb{R}} \otimes_{\mathbb{R}} \mathbb{C} \to V$ between complex vector spaces with a given real structure.
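The construction above can be made concrete in a small numerical sketch. The following Python/NumPy fragment (an illustration only, with componentwise conjugation on $\mathbb{C}^2$ chosen as the antilinear involution and an arbitrary test vector) computes the decomposition $v = v^{+} + v^{-}$ and checks the defining properties:

```python
import numpy as np

def sigma(v):
    # componentwise complex conjugation: an antilinear involution on C^n
    return np.conj(v)

v = np.array([1 + 2j, 3 - 4j])            # an arbitrary test vector in C^2
v_plus = 0.5 * (v + sigma(v))             # the "real" part, in the fixed locus V_R
v_minus = 0.5 * (v - sigma(v))            # the "imaginary" part, in i * V_R

assert np.allclose(v, v_plus + v_minus)          # v = v+ + v-
assert np.allclose(sigma(v_plus), v_plus)        # sigma fixes V_R
assert np.allclose(sigma(v_minus), -v_minus)     # sigma negates i * V_R
```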
A real structure on a complex vector space V, that is, an antilinear involution $\sigma : V \to V$, may be equivalently described in terms of the linear map $\hat{\sigma} : V \to \bar{V}$ from the vector space $V$ to the complex conjugate vector space $\bar{V}$ defined by $\hat{\sigma}(v) := \sigma(v)$, regarded as an element of $\bar{V}$.
For an algebraic variety defined over a subfield of the real numbers ,
the real structure is the complex conjugation acting on the points of the variety in complex projective or affine space.
Its fixed locus is the space of real points of the variety (which may be empty).
For a scheme defined over a subfield of the real numbers, complex conjugation
is in a natural way a member of the Galois group of the algebraic closure of the base field.
The real structure is the Galois action of this conjugation on the extension of the
scheme over the algebraic closure of the base field.
The real points are the points whose residue field is fixed (which may be empty).
In mathematics, a reality structure on a complex vector space V is a decomposition of V into two real subspaces, called the real and imaginary parts of V: $V = V_{\mathbb{R}} \oplus iV_{\mathbb{R}}.$
Here $V_{\mathbb{R}}$ is a real subspace of V, i.e. a subspace of V considered as a vector space over the real numbers. If V has complex dimension n (real dimension 2n), then $V_{\mathbb{R}}$ must have real dimension n.
The standard reality structure on the vector space $\mathbb{C}^{n}$ is the decomposition $\mathbb{C}^{n} = \mathbb{R}^{n} \oplus i\mathbb{R}^{n}.$
In the presence of a reality structure, every vector in V has a real part and an imaginary part, each of which is a vector in $V_{\mathbb{R}}$: $v = \operatorname{Re}(v) + i\operatorname{Im}(v).$ In this case, the complex conjugate of a vector v is defined as follows: $\overline{v} = \operatorname{Re}(v) - i\operatorname{Im}(v).$
This map $v \mapsto \overline{v}$ is an antilinear involution, i.e. $\overline{\overline{v}} = v, \qquad \overline{v + w} = \overline{v} + \overline{w}, \qquad \overline{\alpha v} = \bar{\alpha}\,\overline{v}.$
Conversely, given an antilinear involution $v \mapsto c(v)$ on a complex vector space V, it is possible to define a reality structure on V as follows. Let $V_{\mathbb{R}} = \{v \in V : c(v) = v\},$ and define $\operatorname{Re}(v) = \tfrac{1}{2}\bigl(v + c(v)\bigr), \qquad \operatorname{Im}(v) = \tfrac{1}{2i}\bigl(v - c(v)\bigr).$ Then $V = V_{\mathbb{R}} \oplus iV_{\mathbb{R}}.$
This is actually the decomposition of V into the eigenspaces of the real-linear operator c. The eigenvalues of c are +1 and −1, with eigenspaces $V_{\mathbb{R}}$ and $iV_{\mathbb{R}}$, respectively. Typically, the operator c itself, rather than the eigenspace decomposition it entails, is referred to as the reality structure on V. | https://en.wikipedia.org/wiki/Reality_structure |
In mathematical logic , realizability is a collection of methods in proof theory used to study constructive proofs and extract additional information from them. [ 1 ] Formulas from a formal theory are "realized" by objects, known as "realizers", in a way that knowledge of the realizer gives knowledge about the truth of the formula. There are many variations of realizability; exactly which class of formulas is studied and which objects are realizers differ from one variation to another.
Realizability can be seen as a formalization of the Brouwer–Heyting–Kolmogorov (BHK) interpretation of intuitionistic logic . In realizability the notion of "proof" (which is left undefined in the BHK interpretation) is replaced with a formal notion of "realizer". Most variants of realizability begin with a theorem that any statement that is provable in the formal system being studied is realizable. The realizer, however, usually gives more information about the formula than a formal proof would directly provide.
Beyond giving insight into intuitionistic provability, realizability can be applied to prove the disjunction and existence properties for intuitionistic theories and to extract programs from proofs, as in proof mining . It is also related to topos theory via realizability topoi .
Kleene's original version of realizability uses natural numbers as realizers for formulas in Heyting arithmetic. A few pieces of notation are required: first, an ordered pair ( n , m ) is treated as a single number using a fixed primitive recursive pairing function; second, for each natural number n, φ n is the computable function with index n. The following clauses are used to define a relation " n realizes A " between natural numbers n and formulas A in the language of Heyting arithmetic, known as Kleene's 1945-realizability relation: [ 2 ]
- n realizes an atomic formula s = t if and only if s = t is true.
- n realizes A ∧ B if n = (a, b), where a realizes A and b realizes B.
- n realizes A ∨ B if n = (0, a) and a realizes A, or n = (1, b) and b realizes B.
- n realizes A → B if, whenever m realizes A, the value φ n (m) is defined and realizes B.
- n realizes ¬A if no m realizes A (so that the clause for A → 0 = 1 is vacuously satisfied).
- n realizes ∀x A(x) if, for every m, the value φ n (m) is defined and realizes A(m).
- n realizes ∃x A(x) if n = (m, a) and a realizes A(m).
With this definition, the following theorem is obtained: [ 3 ] if a closed formula is provable in Heyting arithmetic, then it is realized by some natural number, and an index for such a realizer can be extracted effectively from the proof.
On the other hand, there are classical theorems (even propositional formula schemas) that are realized but which are not provable in HA, a fact first established by Rose. [ 4 ] So realizability does not exactly mirror intuitionistic reasoning.
Further analysis of the method can be used to prove that HA has the " disjunction and existence properties ": [ 5 ] if HA proves a closed disjunction A ∨ B, then HA proves A or HA proves B; and if HA proves a closed formula ∃x A(x), then HA proves A(n) for some numeral n.
More such properties are obtained involving Harrop formulas .
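As a loose illustration of how a realizer can carry a computational witness, the following Python sketch (not Kleene's formal machinery; the Cantor pairing function and the sample formula are illustrative choices) packages a witness for an existential statement into a single number and recovers it:

```python
from math import isqrt

def pair(n, m):
    # Cantor pairing: a primitive recursive bijection N x N -> N
    return (n + m) * (n + m + 1) // 2 + m

def unpair(p):
    # inverse of the Cantor pairing function
    w = (isqrt(8 * p + 1) - 1) // 2
    t = w * (w + 1) // 2
    m = p - t
    return w - m, m

# A realizer for "there exists x with x + x = 10" is, by the existential clause,
# a pair (witness, r) where r realizes the atomic formula "witness + witness = 10";
# here we simply use 0 for r, since any number realizes a true atomic formula.
realizer = pair(5, 0)

witness, _ = unpair(realizer)
assert witness + witness == 10   # the realizer lets us extract the witness
```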
Kreisel introduced modified realizability, which uses typed lambda calculus as the language of realizers. Modified realizability is one way to show that Markov's principle is not derivable in intuitionistic logic. By contrast, it can be used to constructively justify the principle of independence of premise.
Relative realizability [ 6 ] is an intuitionist analysis of computable or computably enumerable elements of data structures that are not necessarily computable, such as computable operations on all real numbers, even though the reals can only be approximated on digital computer systems.
Classical realizability was introduced by Krivine [ 7 ] and extends realizability to classical logic. It furthermore realizes axioms of Zermelo–Fraenkel set theory . Understood as a generalization of Cohen ’s forcing , it was used to provide new models of set theory. [ 8 ]
Linear realizability extends realizability techniques to linear logic. The term was coined by Seiller [ 9 ] to encompass several constructions, such as geometry of interaction models, [ 10 ] ludics, [ 11 ] and interaction graphs models. [ 12 ]
Realizability is one of the methods used in proof mining to extract concrete "programs" from seemingly non-constructive mathematical proofs. Program extraction using realizability is implemented in some proof assistants such as Coq . | https://en.wikipedia.org/wiki/Realizability |
In probability and statistics , a realization , observation , or observed value , of a random variable is the value that is actually observed (what actually happened). The random variable itself is the process dictating how the observation comes about. Statistical quantities computed from realizations without deploying a statistical model are often called " empirical ", as in empirical distribution function or empirical probability .
Conventionally, to avoid confusion, upper case letters denote random variables; the corresponding lower case letters denote their realizations. [ 1 ]
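A minimal Python sketch of this convention (the die-roll variable and the sample size are illustrative choices): X plays the role of the random variable, and each value it returns is one realization, from which empirical quantities can be computed.

```python
import random

def X():
    # the random variable: the outcome of rolling a fair six-sided die
    return random.randint(1, 6)

x = X()                             # x is one realization (observed value) of X
xs = [X() for _ in range(1000)]     # many independent realizations
empirical_mean = sum(xs) / len(xs)  # an "empirical" quantity computed from realizations
print(x, empirical_mean)
```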
In more formal probability theory , a random variable is a function X defined from a sample space Ω to a measurable space called the state space . [ 2 ] [ a ] If an element in Ω is mapped to an element in state space by X , then that element in state space is a realization. Elements of the sample space can be thought of as all the different possibilities that could happen; while a realization (an element of the state space) can be thought of as the value X attains when one of the possibilities did happen. Probability is a mapping that assigns numbers between zero and one to certain subsets of the sample space, namely the measurable subsets, known here as events . Subsets of the sample space that contain only one element are called elementary events . The value of the random variable (that is, the function) X at a point ω ∈ Ω,
namely X(ω), is called a realization of X. [ 3 ] | https://en.wikipedia.org/wiki/Realization_(probability) |
Realized niche width is a term in ecology that refers to the actual space that an organism inhabits and the resources it can access as a result of limiting pressures from other species (e.g. superior competitors). An organism's ecological niche is determined by the biotic and abiotic factors of the ecosystem that allow that organism to survive there. The width of an organism's niche is set by the range of conditions the species is able to survive in that specific environment.
The fundamental niche width of an organism refers to the theoretical range of conditions in which it could survive and reproduce, without considering interspecific interactions. The fundamental niche exclusively considers limiting biotic and abiotic factors such as appropriate food sources and a suitable climate. The fundamental niche width often differs from the realized niche width (the area actually inhabited by a given species). [ 1 ] This difference is due to interspecific competition with other species within the ecosystem, in addition to the biotic and abiotic limiting factors. A species' realized niche is usually much narrower than its fundamental niche, as the species is forced to adjust its niche around superior competitors.
The physical area where a species lives is its habitat; the set of environmental features essential to that species' survival is its niche. (Ecology. Begon, Harper, Townsend)
The difference between the realized and the fundamental niche is important in understanding how interactions with a variety of different species in one environment affect the fitness of another species. This is important not only for understanding how a species functions in an ecosystem, but also for determining the potential and realized success of invasive species. Invasive species could thrive or be killed off in an environment where they would theoretically be able to exist, depending on the presence or absence of other species. [ 2 ] To survive, an invasive species first has to survive the journey to the new area and then be able to survive in that habitat. After this, it must be able to successfully compete and reproduce alongside the species already present in the newly invaded environment. Considering these factors, not all invasive species are devastating to the new environment they inhabit, as they must first overcome these other challenges before they can negatively affect their new environment.
In an organism's niche, the abiotic and biotic factors determine the ability of a species to survive; however, both the abiotic and biotic factors of that environment can be changed by that species' existence. A species' impact on the biotic environment in its niche tends to affect not only that species' ability to survive but also the other species it coexists with. Again, these changes are important in understanding the effects of invasive species in a new habitat. The ability of a new species to change an environment's abiotic and biotic factors can make a previously habitable environment uninhabitable for another species. The extinction of that species can further change the biotic factors of the environment. Invasive species therefore not only directly affect the biotic environment, but also indirectly affect it by affecting which species are able to survive in the habitat.
Niche theory states that a species' range is limited by its physiological tolerances (fundamental niche) and its biotic limitations (realized niche). The survival rates of organisms facing rapid niche shifts help scientists predict the future effects of climate change and invasive species on current ecological communities. The ability of organisms to shift niches also helps scientists understand community formation and speciation. Niche shifts for invasive species in their native environment differ from those in their newly invaded environment. After an invasive species is introduced to a new environment, it has to cope with new biotic factors, environmental constraints, and climate differences. These variables play a role in determining how the organism's niche will evolve. Biophysical models use links between an organism's preferred climate and its functional traits to determine where an organism could survive without taking biotic factors into account. [ 3 ]
The phenomenon of fundamental and realized niches was documented by the ecologist Joseph Connell in his study of species overlap between barnacles on intertidal rocks. He observed that Chthamalus stellatus and Balanus balanoides inhabited the upper and lower strata of intertidal rocks respectively, but only Chthamalus barnacles could survive both the upper and lower strata without desiccation. The removal of Balanus barnacles from the lower strata resulted in the Chthamalus barnacles occupying their fundamental niche (both upper and lower strata), which is much larger than their realized niche in the upper strata. [ 4 ]
This experiment was conducted on the rocky intertidal because of its accessibility and the large amount of previous research done on the species living there. Many of the species that live there are sedentary or slow moving, making them easier to study, and for the same reason they can be readily manipulated into experimental and control groups. The goal of Connell's experiment was to determine how much physical factors and biotic competition affected community structure in the rocky intertidal ecosystem. Vertical zonation also plays a role in determining the placement of different species in the rocky intertidal ecosystem, which was previously thought to be due to the tides. [ 5 ]
A study by Tingley et al. focuses on the invasion of the cane toad ( Rhinella marina , formerly Bufo marinus ) in Australia. Through thermal acclimation and the development of improved movement functions, this toad has expanded its habitat range significantly. Evidence in this study showed that there was a difference between the toad's native niche and its niche in the invaded environment. A review of 180 case studies showed that only 50% of invasive species went through a niche shift; however, niche changes are determined in a variety of different ways, making it hard to judge how accurate this figure is.
It was also shown that the toad's increased range was only observed in Australia and not in its native environment, even though the same physical conditions were present in both. This means that biotic factors and/or dispersal barriers limit the toad in its native environment. Without these constraints in its invaded environment, the toad is able to fill out its fundamental niche. Determining realized niches helps with developing biotic control agents for invasive species, and determining an organism's fundamental niche helps scientists conclude how well a species would be able to survive and adapt to climate change. [ 3 ]
Another study, by Truong et al., reviewed the use of plants as the realized niche for the human pathogen Listeria monocytogenes. The paper focuses on how the pathogen uses a plant as its realized niche. The fundamental niche of this pathogen can be determined through studies in which the pathogen is grown aseptically (without other microorganisms); however, abiotic and biotic factors limit the ability of this pathogen to exist in nature. The study was not able to clearly determine how this pathogen and plants survive together, but it did show that the plants did not defend themselves against the presence of the pathogen. It also supported the theory that this pathogen can use plant nutrients to survive and multiply if the plant's environment and the competition allow. More comprehensive research will be needed to determine this pathogen's realized niche. The study further shows how determining an organism's realized niche can help in understanding this human pathogen's natural history. [ 6 ] | https://en.wikipedia.org/wiki/Realized_niche_width |
Realizing Increased Photosynthetic Efficiency (RIPE) is a translational research project that is genetically engineering plants to photosynthesize more efficiently in order to increase crop yields. [ 1 ] RIPE aims to increase agricultural production worldwide, particularly to help reduce hunger and poverty in Sub-Saharan Africa and Southeast Asia, by sustainably improving the yield of key food crops including soybeans , rice , cassava [ 2 ] and cowpeas . [ 3 ] The RIPE project began in 2012, funded by a five-year, $25 million grant from the Bill and Melinda Gates Foundation . [ 4 ] In 2017, the project received a $45 million reinvestment from the Gates Foundation, the Foundation for Food and Agriculture Research, and the UK Government's Department for International Development . [ 5 ] In 2018, the Gates Foundation contributed an additional $13 million to accelerate the project's progress. [ 6 ]
During the 20th century, the Green Revolution dramatically increased yields through advances in plant breeding and land management . [ 7 ] This period of agricultural innovation is credited for saving millions of lives. [ 8 ] However, these approaches are reaching their biological limits, leading to stagnation in yield improvement. In 2009, the Food and Agriculture Organization projected that global food production must increase by 70% by 2050 to feed an estimated world population of 9 billion people. [ 9 ] Meeting the demands of 2050 is further challenged by shrinking arable land , decreasing natural resources , and climate change . [ 10 ]
The RIPE project's proof-of-concept study, published in Science , established that photosynthesis can be improved to increase yields. [ 11 ] [ 12 ] The Guardian named this discovery one of the 12 key science moments of 2016. [ 13 ] Computer model simulations identify strategies to improve the basic underlying mechanisms of photosynthesis and increase yield. [ 14 ] First, researchers transform, or genetically engineer, model plants that are tested in controlled environments, e.g. growth chambers and greenhouses. Next, successful transformations are tested in randomized, replicated field trials. Finally, transformations with statistically significant yield increases are translated to the project's target food crops. [ 15 ] Several approaches could likely be combined to additively increase yield. "Global access" ensures smallholder farmers will be able to use and afford the project's intellectual property. [ 16 ]
RIPE is led by the University of Illinois at the Carl R. Woese Institute for Genomic Biology . The project's partner institutions include the Australian National University , Chinese Academy of Sciences , Commonwealth Scientific and Industrial Research Organisation , Lancaster University , Louisiana State University , University of California at Berkeley , University of Cambridge , University of Essex , and the United States Department of Agriculture / Agricultural Research Service .
The Executive Committee oversees the various research strategies; its members are listed in the table below. | https://en.wikipedia.org/wiki/Realizing_Increased_Photosynthetic_Efficiency |
A rear-end collision , often called rear-ending or, in the UK, a shunt , occurs when a forward-moving vehicle crashes into the back of another vehicle (often stationary) in front of it. Similarly, rear-end rail collisions occur when a train runs into the end of a preceding train on the same track . [ 1 ] Common factors contributing to rear-end collisions include driver inattention or distraction, tailgating , panic stops, brake checking and reduced traction due to wet weather or worn pavement .
According to the National Highway Traffic Safety Administration (NHTSA), rear-end collisions account for 7.5% of fatal automobile collisions. However, they account for 29% of all automobile accidents , making them one of the most frequent types of automobile accidents in the United States . [ 2 ]
According to NHTSA data for 2020, of the 419,400 people killed or injured in rear-end crashes, less than 1% were killed and over 99% were injured. [ 3 ]
A typical scenario for a rear-end collision is a sudden deceleration by the first car (for example, to avoid someone crossing the street), so that the driver behind does not have time to brake and collides with it. Alternatively, the following car may accelerate more rapidly than the leading one (for example, when leaving an intersection), resulting in a collision.
Generally, if two vehicles have similar physical structures, crashing into another car is equivalent to crashing into a rigid, immovable surface (like a wall) at half of the closing speed. This means that rear-ending a stationary car while travelling at 50 km/h (30 mph) is equivalent, in terms of deceleration, to crashing into a wall at 25 km/h (15 mph). The same is true for the vehicle crashed into. However, if one of the vehicles is significantly more rigid (e.g. a small car hits the rear of a heavy truck) then the deceleration is more typically reflected by the full closing speed for the less rigid vehicle.
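As a rough illustrative calculation of the rule of thumb just described (it assumes two vehicles of similar structure and mass, and uses the figures from the text):

```python
def equivalent_wall_impact_speed(rear_speed_kmh, lead_speed_kmh=0.0):
    # For structurally similar vehicles, the deceleration in a rear-end collision
    # is comparable to hitting a rigid wall at half of the closing speed.
    closing_speed = rear_speed_kmh - lead_speed_kmh
    return closing_speed / 2.0

print(equivalent_wall_impact_speed(50))      # stationary lead car: 25.0 km/h, as in the text
print(equivalent_wall_impact_speed(80, 30))  # lead car moving at 30 km/h: also 25.0 km/h
```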
A typical medical consequence of rear-ends, even in collisions at moderate speed, is whiplash . In more severe cases, permanent injuries such as herniation may occur. The rearmost passengers in minivans , benefiting little from the short rear crumple zone , are more likely to be injured or killed. [ 4 ]
For purposes of insurance and policing , the driver of the car that rear-ends the other car is almost always considered at fault due to following too closely, or lack of attention. An exception is if the rear-ended vehicle is in reverse gear. If the driver of the car that was rear-ended files a claim against the driver who hit them, the second driver could be responsible for all damages to the other driver's car. According to data from the NHTSA , the percentage of rear-end accidents in all crashes is 23–30%. [ 5 ]
The Ford Pinto received widespread concern when it was alleged that a design flaw could cause fuel-fed fires in rear-end collisions. [ 6 ]
Recent developments in automated safety systems have reduced the number of rear-end collisions. [ 7 ] [ 8 ] [ 5 ] | https://en.wikipedia.org/wiki/Rear-end_collision |
The Rear Admiral William S. Parsons Award for Scientific and Technical Progress is awarded each year by the Navy League of the United States to a Navy or Marine Corps officer, enlisted person or civilian, who has made an outstanding contribution in any field of science that has furthered the development and progress of the US Navy or Marine Corps. The award is named for Admiral William Sterling Parsons . The award is presented with a certificate and a watch along with other Professional Excellence Awards (Sea Service Awards) at the National Convention of the Navy League of the United States.
The award is described by the Navy League of the United States as:
"The Rear Admiral William S. Parsons Award is named for Admiral Parsons in recognition of his dedication to all aspects of scientific and technical advances and who was responsible to a marked degree for ensuring that the U.S. Navy remained in operational consonance with the ever-shifting and increasing demands of the changing world.
Presented since 1957, this award for scientific and technical progress is awarded to a Navy or Marine Corps officer, enlisted person or civilian who has made an outstanding contribution in any field of science that has furthered the development and progress of the Navy or Marine
Corps." [ 1 ]
The following is the list of recipients: [ a ] | https://en.wikipedia.org/wiki/Rear_Admiral_William_S._Parsons_Award |
In mathematics, the rearrangement inequality [ 1 ] states that for every choice of real numbers $x_1 \leq \cdots \leq x_n$ and $y_1 \leq \cdots \leq y_n$ and every permutation $\sigma$ of the numbers $1, 2, \ldots, n$ we have
$x_1 y_n + x_2 y_{n-1} + \cdots + x_n y_1 \;\leq\; x_1 y_{\sigma(1)} + \cdots + x_n y_{\sigma(n)} \;\leq\; x_1 y_1 + \cdots + x_n y_n. \qquad (1)$
Informally, this means that in these types of sums, the largest sum is achieved by pairing large $x$ values with large $y$ values, and the smallest sum is achieved by pairing small values with large values. This can be formalised in the case that the $x_1, \ldots, x_n$ are distinct, meaning that $x_1 < \cdots < x_n$: then the upper bound in ( 1 ) is attained only by permutations $\sigma$ with $y_{\sigma(i)} = y_i$ for all $i$, and the lower bound only by permutations with $y_{\sigma(i)} = y_{n+1-i}$ for all $i$.
Note that the rearrangement inequality ( 1 ) makes no assumptions on the signs of the real numbers, unlike inequalities such as the arithmetic-geometric mean inequality .
Many important inequalities can be proved by the rearrangement inequality, such as the arithmetic mean – geometric mean inequality , the Cauchy–Schwarz inequality , and Chebyshev's sum inequality .
As a simple example, consider real numbers $x_1 \leq \cdots \leq x_n$: by applying ( 1 ) with $y_i := x_i$ for all $i = 1, \ldots, n$, it follows that $x_1 x_n + \cdots + x_n x_1 \leq x_1 x_{\sigma(1)} + \cdots + x_n x_{\sigma(n)} \leq x_1^2 + \cdots + x_n^2$ for every permutation $\sigma$ of $1, \ldots, n$.
The rearrangement inequality can be regarded as intuitive in the following way. Imagine there is a heap of $10 bills, a heap of $20 bills and one more heap of $100 bills. You are allowed to take 7 bills from a heap of your choice and then the heap disappears. In the second round you are allowed to take 5 bills from another heap and the heap disappears. In the last round you may take 3 bills from the last heap. In what order do you want to choose the heaps to maximize your profit? Obviously, the best you can do is to gain $7 \cdot 100 + 5 \cdot 20 + 3 \cdot 10$ dollars. This is exactly what the upper bound of the rearrangement inequality ( 1 ) says for the sequences $3 < 5 < 7$ and $10 < 20 < 100$. In this sense, it can be considered as an example of a greedy algorithm .
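The heap example can be checked by brute force. The following Python sketch (an exhaustive search over permutations, feasible only for small n) confirms that the sorted pairing gives the maximum and the reversed pairing the minimum:

```python
from itertools import permutations

x = [3, 5, 7]        # numbers of bills taken per round
y = [10, 20, 100]    # bill denominations

sums = [sum(a * b for a, b in zip(x, p)) for p in permutations(y)]
best = sum(a * b for a, b in zip(x, y))             # 3*10 + 5*20 + 7*100 = 830
worst = sum(a * b for a, b in zip(x, reversed(y)))  # 3*100 + 5*20 + 7*10 = 470
assert max(sums) == best and min(sums) == worst
print(best, worst)
```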
Assume that $0 < x_1 < \cdots < x_n$ and $0 < y_1 < \cdots < y_n$. Consider a rectangle of width $x_1 + \cdots + x_n$ and height $y_1 + \cdots + y_n$, subdivided into $n$ columns of widths $x_1, \ldots, x_n$ and the same number of rows of heights $y_1, \ldots, y_n$, so there are $n^2$ small rectangles. You are supposed to take $n$ of these, one from each column and one from each row. The rearrangement inequality ( 1 ) says that you optimize the total area of your selection by taking the rectangles on the diagonal or the antidiagonal.
The lower bound and the corresponding discussion of equality follow by applying the results for the upper bound to $-y_n \leq \cdots \leq -y_1$; that is, for any permutation $\upsilon$ of the numbers $1, 2, \ldots, n$ we have $x_1(-y_{\upsilon(1)}) + \cdots + x_n(-y_{\upsilon(n)}) \leq x_1(-y_n) + \cdots + x_n(-y_1)$. Then $x_1 y_n + \cdots + x_n y_1 \leq x_1 y_{\upsilon(1)} + \cdots + x_n y_{\upsilon(n)}$ for every permutation $\upsilon$ of $1, \ldots, n$.
Therefore, it suffices to prove the upper bound in ( 1 ) and discuss when equality holds.
Since there are only finitely many permutations of $1, \ldots, n$, there exists at least one $\sigma$ for which the middle term in ( 1 ), $x_1 y_{\sigma(1)} + \cdots + x_n y_{\sigma(n)}$, is maximal. In case there are several permutations with this property, let $\sigma$ denote one with the highest number of integers $i$ from $\{1, \ldots, n\}$ satisfying $y_i = y_{\sigma(i)}$.
We will now prove by contradiction that $\sigma$ has to keep the order of $y_1, \ldots, y_n$ (then we are done with the upper bound in ( 1 ), because the identity has that property). Assume that there exists a $j \in \{1, \ldots, n-1\}$ such that $y_i = y_{\sigma(i)}$ for all $i \in \{1, \ldots, j-1\}$ and $y_j \neq y_{\sigma(j)}$. Hence $y_j < y_{\sigma(j)}$ and there has to exist a $k \in \{j+1, \ldots, n\}$ with $y_j = y_{\sigma(k)}$ to fill the gap. Therefore,
$x_j \leq x_k \quad \text{and} \quad y_{\sigma(k)} < y_{\sigma(j)}, \qquad (2)$
which implies that
$0 \leq (x_k - x_j)\,(y_{\sigma(j)} - y_{\sigma(k)}). \qquad (3)$
Expanding this product and rearranging gives
$x_j y_{\sigma(j)} + x_k y_{\sigma(k)} \leq x_j y_{\sigma(k)} + x_k y_{\sigma(j)}, \qquad (4)$
which is equivalent to ( 3 ). Hence the permutation $\tau(i) := \begin{cases} \sigma(i) & \text{for } i \in \{1, \ldots, n\} \setminus \{j, k\}, \\ \sigma(k) & \text{for } i = j, \\ \sigma(j) & \text{for } i = k, \end{cases}$ which arises from $\sigma$ by exchanging the values $\sigma(j)$ and $\sigma(k)$, has at least one additional point which keeps the order compared to $\sigma$, namely at $j$ satisfying $y_j = y_{\tau(j)}$, and also attains the maximum in ( 1 ) due to ( 4 ). This contradicts the choice of $\sigma$.
If $x_1 < \cdots < x_n$, then we have strict inequalities in ( 2 ), ( 3 ), and ( 4 ), hence the maximum can only be attained by permutations keeping the order of $y_1 \leq \cdots \leq y_n$; every other permutation $\sigma$ cannot be optimal.
As above, it suffices to treat the upper bound in ( 1 ). For a proof by mathematical induction, we start with $n = 2$. Observe that $x_1 \leq x_2$ and $y_1 \leq y_2$ implies that
$(x_2 - x_1)(y_2 - y_1) \geq 0, \qquad (5)$
which is equivalent to
$x_1 y_2 + x_2 y_1 \leq x_1 y_1 + x_2 y_2, \qquad (6)$
hence the upper bound in ( 1 ) is true for $n = 2$. If $x_1 < x_2$, then we get strict inequality in ( 5 ) and ( 6 ) if and only if $y_1 < y_2$. Hence only the identity, which is the only permutation here keeping the order of $y_1 < y_2$, gives the maximum.
As an induction hypothesis, assume that the upper bound in the rearrangement inequality ( 1 ) is true for $n - 1$ with $n \geq 3$, and that in the case $x_1 < \cdots < x_{n-1}$ there is equality only when the permutation $\sigma$ of $1, \ldots, n-1$ keeps the order of $y_1, \ldots, y_{n-1}$.
Consider now $x_1 \leq \cdots \leq x_n$ and $y_1 \leq \cdots \leq y_n$. Take a $\sigma$ from the finite number of permutations of $1, \ldots, n$ such that the rearrangement in the middle of ( 1 ) gives the maximal result. There are two cases:
A straightforward generalization takes into account more sequences. Assume we have finite ordered sequences of nonnegative real numbers $0 \leq x_1 \leq \cdots \leq x_n$, $0 \leq y_1 \leq \cdots \leq y_n$, and $0 \leq z_1 \leq \cdots \leq z_n$, a permutation $y_{\sigma(1)}, \ldots, y_{\sigma(n)}$ of $y_1, \ldots, y_n$, and another permutation $z_{\tau(1)}, \ldots, z_{\tau(n)}$ of $z_1, \ldots, z_n$. Then $x_1 y_{\sigma(1)} z_{\tau(1)} + \cdots + x_n y_{\sigma(n)} z_{\tau(n)} \leq x_1 y_1 z_1 + \cdots + x_n y_n z_n.$
Note that, unlike the standard rearrangement inequality ( 1 ), this statement requires the numbers to be nonnegative. A similar statement is true for any number of sequences with all numbers nonnegative.
Another generalization of the rearrangement inequality states that for all real numbers $x_1 \leq \cdots \leq x_n$ and every choice of continuously differentiable functions $f_i : [x_1, x_n] \to \mathbb{R}$ for $i = 1, 2, \ldots, n$ such that their derivatives satisfy $f_1'(x) \leq f_2'(x) \leq \cdots \leq f_n'(x)$ for all $x \in [x_1, x_n]$, the inequality $\sum_{i=1}^{n} f_{n-i+1}(x_i) \leq \sum_{i=1}^{n} f_{\sigma(i)}(x_i) \leq \sum_{i=1}^{n} f_i(x_i)$ holds for every permutation $f_{\sigma(1)}, \ldots, f_{\sigma(n)}$ of $f_1, \ldots, f_n$. [ 2 ] Taking real numbers $y_1 \leq \cdots \leq y_n$ and the linear functions $f_i(x) := x y_i$ for real $x$ and $i = 1, \ldots, n$, the standard rearrangement inequality ( 1 ) is recovered. | https://en.wikipedia.org/wiki/Rearrangement_inequality |
Reason is the capacity of consciously applying logic by drawing valid conclusions from new or existing information , with the aim of seeking the truth . [ 1 ] It is associated with such characteristically human activities as philosophy , religion , science , language , mathematics , and art , and is normally considered to be a distinguishing ability possessed by humans . [ 2 ] [ 3 ] Reason is sometimes referred to as rationality . [ 4 ]
Reasoning involves using more-or-less rational processes of thinking and cognition to extrapolate from one's existing knowledge to generate new knowledge, and involves the use of one's intellect . The field of logic studies the ways in which humans can use formal reasoning to produce logically valid arguments and true conclusions. [ 5 ] Reasoning may be subdivided into forms of logical reasoning , such as deductive reasoning , inductive reasoning , and abductive reasoning .
Aristotle drew a distinction between logical discursive reasoning (reason proper), and intuitive reasoning , [ 6 ] : VI.7 in which the reasoning process through intuition—however valid—may tend toward the personal and the subjectively opaque. In some social and political settings logical and intuitive modes of reasoning may clash, while in other contexts intuition and formal reason are seen as complementary rather than adversarial. For example, in mathematics , intuition is often necessary for the creative processes involved with arriving at a formal proof , arguably the most difficult of formal reasoning tasks.
Reasoning, like habit or intuition , is one of the ways by which thinking moves from one idea to a related idea. For example, reasoning is the means by which rational individuals understand the significance of sensory information from their environments, or conceptualize abstract dichotomies such as cause and effect , truth and falsehood , or good and evil . Reasoning, as a part of executive decision making , is also closely identified with the ability to self-consciously change, in terms of goals , beliefs , attitudes , traditions , and institutions , and therefore with the capacity for freedom and self-determination . [ 7 ]
Psychologists and cognitive scientists have attempted to study and explain how people reason , e.g. which cognitive and neural processes are engaged, and how cultural factors affect the inferences that people draw. The field of automated reasoning studies how reasoning may or may not be modeled computationally. Animal psychology considers the question of whether animals other than humans can reason.
In the English language and other modern European languages , "reason", and related words, represent words which have always been used to translate Latin and classical Greek terms in their philosophical sense.
The earliest major philosophers to publish in English, such as Francis Bacon , Thomas Hobbes , and John Locke also routinely wrote in Latin and French, and compared their terms to Greek, treating the words " logos ", " ratio ", " raison " and "reason" as interchangeable. The meaning of the word "reason" in senses such as "human reason" also overlaps to a large extent with " rationality " and the adjective of "reason" in philosophical contexts is normally " rational ", rather than "reasoned" or "reasonable". [ 11 ] Some philosophers, Hobbes for example, also used the word ratiocination as a synonym for "reasoning".
In contrast to the use of "reason" as an abstract noun , a reason is a consideration that either explains or justifies events, phenomena, or behavior . [ 10 ] Reasons justify decisions, reasons support explanations of natural phenomena, and reasons can be given to explain the actions (conduct) of individuals.
The words are connected in this way: using reason, or reasoning, means providing good reasons. For example, when evaluating a moral decision, "morality is, at the very least, the effort to guide one's conduct by reason —that is, doing what there are the best reasons for doing—while giving equal [and impartial] weight to the interests of all those affected by what one does." [ 12 ]
The proposal that reason gives humanity a special position in nature has been argued [ citation needed ] to be a defining characteristic of western philosophy and later western science , starting with classical Greece. Philosophy can be described as a way of life based upon reason, while reason has been among the major subjects of philosophical discussion since ancient times. Reason is often said to be reflexive , or "self-correcting", and the critique of reason has been a persistent theme in philosophy. [ 13 ]
For many classical philosophers , nature was understood teleologically , meaning that every type of thing had a definitive purpose that fit within a natural order that was itself understood to have aims. Perhaps starting with Pythagoras or Heraclitus , the cosmos was even said to have reason. [ 14 ] Reason, by this account, is not just a characteristic that people happen to have. Reason was considered of higher stature than other characteristics of human nature, because it is something people share with nature itself, linking an apparently immortal part of the human mind with the divine order of the cosmos. Within the human mind or soul ( psyche ), reason was described by Plato as being the natural monarch which should rule over the other parts, such as spiritedness ( thumos ) and the passions. Aristotle , Plato's student, defined human beings as rational animals , emphasizing reason as a characteristic of human nature . He described the highest human happiness or well being ( eudaimonia ) as a life which is lived consistently, excellently, and completely in accordance with reason. [ 6 ] : I
The conclusions to be drawn from the discussions of Aristotle and Plato on this matter are amongst the most debated in the history of philosophy. [ 15 ] But teleological accounts such as Aristotle's were highly influential for those who attempt to explain reason in a way that is consistent with monotheism and the immortality and divinity of the human soul. For example, in the neoplatonist account of Plotinus , the cosmos has one soul, which is the seat of all reason, and the souls of all people are part of this soul. Reason is for Plotinus both the provider of form to material things, and the light which brings people's souls back into line with their source. [ 16 ]
The classical view of reason, like many important Neoplatonic and Stoic ideas, was readily adopted by the early Church [ 17 ] as the Church Fathers saw Greek Philosophy as an indispensable instrument given to mankind so that we may understand revelation. [ 18 ] [ verification needed ] For example, the greatest among the early Church Fathers and Doctors of the Church such as Augustine of Hippo , Basil of Caesarea , and Gregory of Nyssa were as much Neoplatonic philosophers as they were Christian theologians, and they adopted the Neoplatonic view of human reason and its implications for our relationship to creation, to ourselves, and to God.
The Neoplatonic conception of the rational aspect of the human soul was widely adopted by medieval Islamic philosophers and continues to hold significance in Iranian philosophy . [ 15 ] As European intellectual life reemerged from the Dark Ages , the Christian Patristic tradition and the influence of esteemed Islamic scholars like Averroes and Avicenna contributed to the development of the Scholastic view of reason, which laid the foundation for our modern understanding of this concept. [ 19 ]
Among the Scholastics who relied on the classical concept of reason for the development of their doctrines, none were more influential than Saint Thomas Aquinas , who put this concept at the heart of his Natural Law . In this doctrine, Thomas concludes that because humans have reason and because reason is a spark of the divine, every single human life is invaluable, all humans are equal, and every human is born with an intrinsic and permanent set of basic rights. [ 20 ] On this foundation, the idea of human rights would later be constructed by Spanish theologians at the School of Salamanca .
Other Scholastics, such as Roger Bacon and Albertus Magnus , following the example of Islamic scholars such as Alhazen , emphasised reason as an intrinsic human ability to decode the created order and the structures that underlie our experienced physical reality. This interpretation of reason was instrumental to the development of the scientific method in the early Universities of the high Middle Ages. [ 21 ]
The early modern era was marked by a number of significant changes in the understanding of reason, starting in Europe . One of the most important of these changes involved a change in the metaphysical understanding of human beings. Scientists and philosophers began to question the teleological understanding of the world. [ 22 ] Nature was no longer assumed to be human-like, with its own aims or reason, and human nature was no longer assumed to work according to anything other than the same " laws of nature " which affect inanimate things. This new understanding eventually displaced the previous world view that derived from a spiritual understanding of the universe.
Accordingly, in the 17th century, René Descartes explicitly rejected the traditional notion of humans as "rational animals", suggesting instead that they are nothing more than "thinking things" along the lines of other "things" in nature. Any grounds of knowledge outside that understanding was, therefore, subject to doubt.
In his search for a foundation of all possible knowledge, Descartes decided to throw into doubt all knowledge— except that of the mind itself in the process of thinking:
At this time I admit nothing that is not necessarily true. I am therefore precisely nothing but a thinking thing; that is a mind, or intellect, or understanding, or reason—words of whose meanings I was previously ignorant. [ 23 ]
This eventually became known as epistemological or "subject-centred" reason, because it is based on the knowing subject , who perceives the rest of the world and itself as a set of objects to be studied, and successfully mastered, by applying the knowledge accumulated through such study. Breaking with tradition and with many thinkers after him, Descartes explicitly did not divide the incorporeal soul into parts, such as reason and intellect, describing them instead as one indivisible incorporeal entity.
A contemporary of Descartes, Thomas Hobbes described reason as a broader version of "addition and subtraction" which is not limited to numbers. [ 24 ] This understanding of reason is sometimes termed "calculative" reason. Similar to Descartes, Hobbes asserted that "No discourse whatsoever, can end in absolute knowledge of fact, past, or to come" but that "sense and memory" is absolute knowledge. [ 25 ]
In the late 17th century through the 18th century, John Locke and David Hume developed Descartes's line of thought still further. Hume took it in an especially skeptical direction, proposing that there could be no possibility of deducing relationships of cause and effect, and therefore no knowledge is based on reasoning alone, even if it seems otherwise. [ 26 ]
Hume famously remarked that, "We speak not strictly and philosophically when we talk of the combat of passion and of reason. Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them." [ 27 ] Hume also took his definition of reason to unorthodox extremes by arguing, unlike his predecessors, that human reason is not qualitatively different from either simply conceiving individual ideas, or from judgments associating two ideas, [ 28 ] and that "reason is nothing but a wonderful and unintelligible instinct in our souls, which carries us along a certain train of ideas, and endows them with particular qualities, according to their particular situations and relations." [ 29 ] It followed from this that animals have reason, only much less complex than human reason.
In the 18th century, Immanuel Kant attempted to show that Hume was wrong by demonstrating that a " transcendental " self, or "I", was a necessary condition of all experience. Therefore, suggested Kant, on the basis of such a self, it is in fact possible to reason both about the conditions and limits of human knowledge. And so long as these limits are respected, reason can be the vehicle of morality, justice, aesthetics, theories of knowledge ( epistemology ), and understanding. [ citation needed ] [ 30 ]
In the formulation of Kant, who wrote some of the most influential modern treatises on the subject, the great achievement of reason ( German : Vernunft ) is that it is able to exercise a kind of universal law-making. Kant was able therefore to reformulate the basis of moral-practical, theoretical, and aesthetic reasoning on "universal" laws.
Here, practical reasoning is the self-legislating or self-governing formulation of universal norms , and theoretical reasoning is the way humans posit universal laws of nature . [ 31 ]
Under practical reason, the moral autonomy or freedom of people depends on their ability, by the proper exercise of that reason, to behave according to laws that are given to them. This contrasted with earlier forms of morality, which depended on religious understanding and interpretation, or on nature , for their substance. [ 32 ]
According to Kant, in a free society each individual must be able to pursue their goals however they see fit, as long as their actions conform to principles given by reason. He formulated such a principle, called the " categorical imperative ", which would justify an action only if it could be universalized:
Act only according to that maxim whereby you can, at the same time, will that it should become a universal law. [ 33 ]
In contrast to Hume, Kant insisted that reason itself (German Vernunft ) could be used to find solutions to metaphysical problems, especially the discovery of the foundations of morality. Kant claimed that these solutions could be found with his " transcendental logic ", which unlike normal logic is not just an instrument that can be used indifferently, as it was for Aristotle, but a theoretical science in its own right and the basis of all the others. [ 34 ]
According to Jürgen Habermas , the "substantive unity" of reason has dissolved in modern times, such that it can no longer answer the question "How should I live?" Instead, the unity of reason has to be strictly formal, or "procedural". He thus described reason as a group of three autonomous spheres (on the model of Kant's three critiques): the cognitive-instrumental reason of science and technology, the moral-practical reason of law and morality, and the aesthetic-expressive reason of art and criticism.
For Habermas, these three spheres are the domain of experts, and therefore need to be mediated with the " lifeworld " by philosophers. In drawing such a picture of reason, Habermas hoped to demonstrate that the substantive unity of reason, which in pre-modern societies had been able to answer questions about the good life, could be made up for by the unity of reason's formalizable procedures. [ 35 ]
Hamann , Herder , Kant , Hegel , Kierkegaard , Nietzsche , Heidegger , Foucault , Rorty , and many other philosophers have contributed to a debate about what reason means, or ought to mean. Some, like Kierkegaard, Nietzsche, and Rorty, are skeptical about subject-centred, universal, or instrumental reason, and even skeptical toward reason as a whole. Others, including Hegel, believe that it has obscured the importance of intersubjectivity , or "spirit" in human life, and they attempt to reconstruct a model of what reason should be.
Some thinkers, e.g. Foucault, believe there are other forms of reason, neglected but essential to modern life, and to our understanding of what it means to live a life according to reason. [ 13 ] Others suggest that there is not just one reason or rationality, but multiple possible systems of reason or rationality which may conflict (in which case there is no super-rational system one can appeal to in order to resolve the conflict). [ 36 ]
In the last several decades, a number of proposals have been made to "re-orient" this critique of reason, or to recognize the "other voices" or "new departments" of reason:
For example, in opposition to subject-centred reason, Habermas has proposed a model of communicative reason that sees it as an essentially cooperative activity, based on the fact of linguistic intersubjectivity . [ 37 ]
Nikolas Kompridis proposed a widely encompassing view of reason as "that ensemble of practices that contributes to the opening and preserving of openness" in human affairs, and a focus on reason's possibilities for social change. [ 38 ]
The philosopher Charles Taylor , influenced by the 20th century German philosopher Martin Heidegger , proposed that reason ought to include the faculty of disclosure , which is tied to the way we make sense of things in everyday life, as a new "department" of reason. [ 39 ]
In the essay "What is Enlightenment?", Michel Foucault proposed a critique based on Kant's distinction between "private" and "public" uses of reason: [ 40 ]
The terms logic or logical are sometimes used as if they were identical with reason or rational , or sometimes logic is seen as the most pure or the defining form of reason: "Logic is about reasoning—about going from premises to a conclusion. ... When you do logic, you try to clarify reasoning and separate good from bad reasoning." [ 41 ] In modern economics , rational choice is assumed to equate to logically consistent choice. [ 42 ]
However, reason and logic can be thought of as distinct—although logic is one important aspect of reason. Author Douglas Hofstadter , in Gödel, Escher, Bach , characterizes the distinction in this way: Logic is done inside a system while reason is done outside the system by such methods as skipping steps, working backward, drawing diagrams, looking at examples, or seeing what happens if you change the rules of the system. [ 43 ] Psychologists Mark H. Bickard and Robert L. Campbell argue that "rationality cannot be simply assimilated to logicality"; they note that "human knowledge of logic and logical systems has developed" over time through reasoning, and logical systems "can't construct new logical systems more powerful than themselves", so reasoning and rationality must involve more than a system of logic. [ 44 ] [ 45 ] Psychologist David Moshman, citing Bickhard and Campbell, argues for a " metacognitive conception of rationality" in which a person's development of reason "involves increasing consciousness and control of logical and other inferences". [ 45 ] [ 46 ]
Reason is a type of thought , and logic involves the attempt to describe a system of formal rules or norms of appropriate reasoning. [ 45 ] The oldest surviving writing to explicitly consider the rules by which reason operates are the works of the Greek philosopher Aristotle , especially Prior Analytics and Posterior Analytics . [ 47 ] [ non-primary source needed ] Although the Ancient Greeks had no separate word for logic as distinct from language and reason, Aristotle's newly coined word " syllogism " ( syllogismos ) identified logic clearly for the first time as a distinct field of study. [ 48 ] When Aristotle referred to "the logical" ( hē logikē ), he was referring more broadly to rational thought. [ 49 ]
As pointed out by philosophers such as Hobbes, Locke, and Hume, some animals are also clearly capable of a type of " associative thinking ", even to the extent of associating causes and effects. A dog, once kicked, can learn how to recognize the warning signs and avoid being kicked in the future, but this does not mean the dog has reason in any strict sense of the word. It also does not mean that humans acting on the basis of experience or habit are using their reason. [ 29 ]
Human reason requires more than being able to associate two ideas—even if those two ideas might be described by a reasoning human as a cause and an effect—perceptions of smoke, for example, and memories of fire. For reason to be involved, the association of smoke and the fire would have to be thought through in a way that can be explained, for example as cause and effect. In the explanation of Locke , for example, reason requires the mental use of a third idea in order to make this comparison by use of syllogism . [ 50 ]
More generally, according to Charles Sanders Peirce , reason in the strict sense requires the ability to create and manipulate a system of symbols , as well as indices and icons , the symbols having only a nominal, though habitual, connection to either (for example) smoke or fire. [ 51 ] One example of such a system of symbols and signs is language .
The connection of reason to symbolic thinking has been expressed in different ways by philosophers. Thomas Hobbes described the creation of "Markes, or Notes of remembrance" as speech . [ 52 ] He used the word speech as an English version of the Greek word logos so that speech did not need to be communicated. [ 53 ] When communicated, such speech becomes language, and the marks or notes of remembrance are called " Signes " by Hobbes. Going further back, although Aristotle is a source of the idea that only humans have reason ( logos ), he does mention that animals with imagination, for whom sense perceptions can persist, come closest to having something like reasoning and nous , and even uses the word " logos " in one place to describe the distinctions which animals can perceive in such cases. [ 54 ]
Reason and imagination rely on similar mental processes . [ 55 ] Imagination is not only found in humans. Aristotle asserted that phantasia (imagination: that which can hold images or phantasmata ) and phronein (a type of thinking that can judge and understand in some sense) also exist in some animals. [ 56 ] According to him, both are related to the primary perceptive ability of animals, which gathers the perceptions of different senses and defines the order of the things that are perceived without distinguishing universals, and without deliberation or logos . But this is not yet reason, because human imagination is different.
Terrence Deacon and Merlin Donald , writing about the origin of language , connect reason not only to language , but also to mimesis . [ 57 ] They describe the ability to create language as part of an internal modeling of reality , and specific to humankind. Other results are consciousness , and imagination or fantasy . In contrast, modern proponents of a genetic predisposition to language itself include Noam Chomsky and Steven Pinker . [ clarification needed ]
If reason is symbolic thinking, and peculiarly human, then this implies that humans have a special ability to maintain a clear consciousness of the distinctness of "icons" or images and the real things they represent. Merlin Donald writes: [ 58 ] : 172
A dog might perceive the "meaning" of a fight that was realistically play-acted by humans, but it could not reconstruct the message or distinguish the representation from its referent (a real fight).... Trained apes are able to make this distinction; young children make this distinction early—hence, their effortless distinction between play-acting an event and the event itself
In classical descriptions, an equivalent description of this mental faculty is eikasia , in the philosophy of Plato. [ 59 ] : Ch.5 This is the ability to perceive whether a perception is an image of something else, related somehow but not the same, and therefore allows humans to perceive that a dream or memory or a reflection in a mirror is not reality as such. What Klein refers to as dianoetic eikasia is the eikasia concerned specifically with thinking and mental images, such as those mental symbols, icons, signes , and marks discussed above as definitive of reason. Explaining reason from this direction: human thinking is special in that we often understand visible things as if they were themselves images of our intelligible "objects of thought" as "foundations" ( hypothēses in Ancient Greek). This thinking ( dianoia ) is "...an activity which consists in making the vast and diffuse jungle of the visible world depend on a plurality of more 'precise' noēta ". [ 59 ] : 122
Both Merlin Donald and the Socratic authors such as Plato and Aristotle emphasize the importance of mimēsis , often translated as imitation or representation . Donald writes: [ 58 ] : 169
Imitation is found especially in monkeys and apes [...but...] Mimesis is fundamentally different from imitation and mimicry in that it involves the invention of intentional representations.... Mimesis is not absolutely tied to external communication.
Mimēsis is a concept, now popular again in academic discussion, that was particularly prevalent in Plato's works. In Aristotle, it is discussed mainly in the Poetics . In Michael Davis's account of the theory of man in that work: [ 60 ]
It is the distinctive feature of human action, that whenever we choose what we do, we imagine an action for ourselves as though we were inspecting it from the outside. Intentions are nothing more than imagined actions, internalizings of the external. All action is therefore imitation of action; it is poetic... [ 61 ]
Donald, like Plato (and Aristotle, especially in On Memory and Recollection ), emphasizes the peculiarity in humans of voluntary initiation of a search through one's mental world. The ancient Greek anamnēsis , normally translated as "recollection" was opposed to mneme or "memory". Memory, shared with some animals, [ 62 ] requires a consciousness not only of what happened in the past, but also that something happened in the past, which is in other words a kind of eikasia [ 59 ] : 109 "...but nothing except man is able to recollect." [ 63 ] Recollection is a deliberate effort to search for and recapture something once known. Klein writes that, "To become aware of our having forgotten something means to begin recollecting." [ 59 ] : 112 Donald calls the same thing autocueing , which he explains as follows: [ 58 ] : 173 [ 64 ] "Mimetic acts are reproducible on the basis of internal, self-generated cues. This permits voluntary recall of mimetic representations, without the aid of external cues—probably the earliest form of representational thinking ."
In his celebrated essay "On Fairy Stories", the fantasy author and philologist J.R.R. Tolkien wrote that the terms "fantasy" and "enchantment" are connected not only to "the satisfaction of certain primordial human desires" but also to "the origin of language and of the mind". [ This quote needs a citation ]
Logic is both a subdivision of philosophy and a variety of reasoning . The traditional main division made in philosophy is between deductive reasoning and inductive reasoning . Formal logic has been described as the science of deduction . [ 65 ] The study of inductive reasoning is generally carried out within the field known as informal logic or critical thinking .
Deduction is a form of reasoning in which a conclusion follows necessarily from the stated premises. A deduction is also the name for the conclusion reached by a deductive reasoning process. A classic example of deductive reasoning is evident in syllogisms like the following: all men are mortal; Socrates is a man; therefore, Socrates is mortal.
The reasoning in this argument is deductively valid because there is no way in which both premises could be true and the conclusion be false.
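As an illustrative aside not drawn from the article, this kind of validity can be checked mechanically. A minimal Lean sketch, using the hypothetical names Thing, Man, Mortal, and socrates, shows that the conclusion is forced by the two premises alone:

```lean
-- Minimal sketch of the classic syllogism; all names are hypothetical.
variable (Thing : Type) (Man Mortal : Thing → Prop) (socrates : Thing)

example
    (h1 : ∀ x, Man x → Mortal x)  -- Premise 1: all men are mortal
    (h2 : Man socrates)           -- Premise 2: Socrates is a man
    : Mortal socrates :=          -- Conclusion: Socrates is mortal
  h1 socrates h2
```

The proof term simply applies the first premise to the second, mirroring the claim that the conclusion cannot be false while both premises are true.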
Induction is a form of inference that produces properties or relations about unobserved objects or types based on previous observations or experiences , or that formulates general statements or laws based on limited observations of recurring phenomenal patterns.
Inductive reasoning contrasts with deductive reasoning in that, even in the strongest cases of inductive reasoning, the truth of the premises does not guarantee the truth of the conclusion. Instead, the conclusion of an inductive argument follows with some degree of probability . For this reason also, the conclusion of an inductive argument contains more information than is already contained in the premises. Thus, this method of reasoning is ampliative.
A classic example of inductive reasoning comes from the empiricist David Hume : the sun has risen in the east every morning up until now; therefore, the sun will also rise in the east tomorrow.
Analogical reasoning is a form of inductive reasoning from a particular to a particular. It is often used in case-based reasoning , especially legal reasoning. [ 66 ] An example follows:
Analogical reasoning is a weaker form of inductive reasoning from a single example, because inductive reasoning typically uses a large number of examples to reason from the particular to the general. [ 67 ] Analogical reasoning often leads to wrong conclusions. For example:
Abductive reasoning, or argument to the best explanation, is a form of reasoning that does not fit in either the deductive or inductive categories, since it starts with an incomplete set of observations and proceeds with likely possible explanations. The conclusion in an abductive argument does not follow with certainty from its premises and concerns something unobserved. What distinguishes abduction from the other forms of reasoning is an attempt to favour one conclusion above others, by subjective judgement or by attempting to falsify alternative explanations or by demonstrating the likelihood of the favoured conclusion, given a set of more or less disputable assumptions. For example, when a patient displays certain symptoms, there might be various possible causes, but one of these is preferred above others as being more probable.
Flawed reasoning in arguments is known as fallacious reasoning . Bad reasoning within arguments can result from either a formal fallacy or an informal fallacy .
Formal fallacies occur when there is a problem with the form, or structure, of the argument. The word "formal" refers to this link to the form of the argument. An argument that contains a formal fallacy will always be invalid.
An informal fallacy is an error in reasoning that occurs due to a problem with the content , rather than the form or structure, of the argument.
In law relating to the actions of an employer or a public body , a decision or action which falls outside the range of actions or decisions available when acting in good faith can be described as "unreasonable". Use of the term is considered in the English law cases of Short v Poole Corporation (1926), Associated Provincial Picture Houses Ltd v Wednesbury Corporation (1947) and Braganza v BP Shipping Limited (2015). [ 68 ]
Philosophy is often characterized as a pursuit of rational understanding, entailing a more rigorous and dedicated application of human reasoning than commonly employed. Philosophers have long debated two fundamental questions regarding reason, essentially examining reasoning itself as a human endeavor, or philosophizing about philosophizing. The first question delves into whether we can place our trust in reason's ability to attain knowledge and truth more effectively than alternative methods. The second question explores whether a life guided by reason, a life that aims to be guided by reason, can be expected to lead to greater happiness compared to other approaches to life.
Since classical antiquity a question has remained constant in philosophical debate (sometimes seen as a conflict between Platonism and Aristotelianism ) concerning the role of reason in confirming truth . People use logic, deduction , and induction to reach conclusions they think are true. Conclusions reached in this way are considered, according to Aristotle, more certain than sense perceptions on their own. [ 69 ] On the other hand, if such reasoned conclusions are only built originally upon a foundation of sense perceptions, then our most logical conclusions can never be said to be certain because they are built upon the very same fallible perceptions they seek to better. [ 70 ]
This leads to the question of what types of first principles , or starting points of reasoning, are available for someone seeking to come to true conclusions. In Greek, " first principles " are archai , "starting points", [ 71 ] and the faculty used to perceive them is sometimes referred to in Aristotle [ 72 ] and Plato [ 73 ] as nous which was close in meaning to awareness or consciousness . [ 74 ]
Empiricism (sometimes associated with Aristotle [ 75 ] but more correctly associated with British philosophers such as John Locke and David Hume , as well as their ancient equivalents such as Democritus ) asserts that sensory impressions are the only available starting points for reasoning and attempting to attain truth. This approach always leads to the controversial conclusion that absolute knowledge is not attainable. Idealism (associated with Plato and his school) claims that there is a "higher" reality, within which certain people can directly discover truth without needing to rely only upon the senses, and that this higher reality is therefore the primary source of truth.
Philosophers such as Plato , Aristotle , Al-Farabi , Avicenna , Averroes , Maimonides , Aquinas , and Hegel are sometimes said [ by whom? ] to have argued that reason must be fixed and discoverable—perhaps by dialectic, analysis, or study. In the vision of these thinkers, reason is divine or at least has divine attributes. Such an approach allowed religious philosophers such as Thomas Aquinas and Étienne Gilson to try to show that reason and revelation are compatible. According to Hegel, "...the only thought which Philosophy brings with it to the contemplation of History , is the simple conception of reason; that reason is the Sovereign of the World; that the history of the world, therefore, presents us with a rational process." [ 76 ]
Since the 17th century rationalists , reason has often been taken to be a subjective faculty , or rather the unaided ability ( pure reason ) to form concepts. For Descartes , Spinoza , and Leibniz , this was associated with mathematics . Kant attempted to show that pure reason could form concepts ( time and space ) that are the conditions of experience. Kant made his argument in opposition to Hume, who denied that reason had any role to play in experience.
After Plato and Aristotle, western literature often treated reason as being the faculty that trained the passions and appetites. [ citation needed ] Stoic philosophy , by contrast, claimed most emotions were merely false judgements. [ 77 ] [ 78 ] According to the Stoics the only good is virtue, and the only evil is vice, therefore emotions that judged things other than vice to be bad (such as fear or distress), or things other than virtue to be good (such as greed) were simply false judgements and should be discarded (though positive emotions based on true judgements, such as kindness, were acceptable). [ 77 ] [ 78 ] [ 79 ] After the critiques of reason in the early Enlightenment the appetites were rarely discussed or were conflated with the passions. [ citation needed ] Some Enlightenment camps took after the Stoics to say reason should oppose passion rather than order it, while others like the Romantics believed that passion displaces reason, as in the maxim "follow your heart". [ citation needed ]
Reason has been seen as cold, an "enemy of mystery and ambiguity", [ 80 ] a slave, or judge, of the passions, notably in the work of David Hume . More recently, Freud wrote, “It seems as though the activity of the other agencies of the mind is able only to modify the pleasure principle but not to nullify it; and it remains a question of the greatest theoretical importance, and one that has not yet been answered, when and how it is ever possible for the pleasure principle to be overcome.” [ 81 ]
Reasoning that claims the object of a desire is demanded by logic alone is called rationalization . [ citation needed ]
Rousseau first proposed, in his second Discourse , that reason and political life are not natural and are possibly harmful to mankind. [ 82 ] He asked what really can be said about what is natural to mankind. What, other than reason and civil society, "best suits his constitution"? Rousseau saw "two principles prior to reason" in human nature. First, we hold an intense interest in our own well-being. Secondly, we object to the suffering or death of any sentient being, especially one like ourselves. [ 83 ] These two passions lead us to desire more than we could achieve. We become dependent upon each other, and on relationships of authority and obedience. This effectively puts the human race into slavery. Rousseau says that he almost dares to assert that nature does not destine men to be healthy. According to Richard Velkley , "Rousseau outlines certain programs of rational self-correction, most notably the political legislation of the Contrat Social and the moral education in Émile . All the same, Rousseau understands such corrections to be only ameliorations of an essentially unsatisfactory condition, that of socially and intellectually corrupted humanity." [ This quote needs a citation ]
This quandary presented by Rousseau led to Kant 's new way of justifying reason as freedom to create good and evil. These therefore are not to be blamed on nature or God. In various ways, German Idealism after Kant, and major later figures such as Nietzsche , Bergson , Husserl , Scheler , and Heidegger , remain preoccupied with problems coming from the metaphysical demands or urges of reason. [ 84 ] Rousseau and these later writers also exerted a large influence on art and politics. Many writers (such as Nikos Kazantzakis ) extol passion and disparage reason. In politics modern nationalism comes from Rousseau's argument that rationalist cosmopolitanism brings man ever further from his natural state. [ 85 ]
In Descartes' Error , Antonio Damasio presents the " Somatic Marker Hypothesis " which states that emotions guide behavior and decision-making. Damasio argues that these somatic markers (known collectively as "gut feelings") are "intuitive signals" that direct our decision-making processes in ways that rationality alone cannot resolve. Damasio further argues that rationality requires emotional input in order to function.
There are many religious traditions, some of which are explicitly fideist and others of which claim varying degrees of rationalism . Secular critics sometimes accuse all religious adherents of irrationality; they claim such adherents are guilty of ignoring, suppressing, or forbidding some kinds of reasoning concerning some subjects (such as religious dogmas, moral taboos, etc.). [ 86 ] Though theologies and religions such as classical monotheism typically do not admit to being irrational , there is often a perceived conflict or tension between faith and tradition on the one hand, and reason on the other, as potentially competing sources of wisdom , law , and truth . [ 74 ] [ 87 ]
Religious adherents sometimes respond by arguing that faith and reason can be reconciled, or have different non-overlapping domains, or that critics engage in a similar kind of irrationalism:
Some commentators have claimed that Western civilization can be almost defined by its serious testing of the limits of tension between "unaided" reason and faith in " revealed " truths—figuratively summarized as Athens and Jerusalem , respectively. [ 93 ] Leo Strauss spoke of a "Greater West " that included all areas under the influence of the tension between Greek rationalism and Abrahamic revelation, including the Muslim lands. He was particularly influenced by the Muslim philosopher Al-Farabi . To consider to what extent Eastern philosophy might have partaken of these important tensions, Strauss thought it best to consider whether dharma or tao may be equivalent to Nature ( physis in Greek). According to Strauss the beginning of philosophy involved the "discovery or invention of nature" and the "pre-philosophical equivalent of nature" was supplied by "such notions as 'custom' or 'ways ' ", which appear to be really universal in all times and places. The philosophical concept of nature or natures as a way of understanding archai (first principles of knowledge) brought about a peculiar tension between reasoning on the one hand, and tradition or faith on the other. [ 74 ]
Scientific research into reasoning is carried out within the fields of psychology and cognitive science . Psychologists attempt to determine whether or not people are capable of rational thought in a number of different circumstances.
Assessing how well someone engages in reasoning is the project of determining the extent to which the person is rational or acts rationally. It is a key research question in the psychology of reasoning and cognitive science of reasoning. Rationality is often divided into its respective theoretical and practical counterparts .
Experimental cognitive psychologists research reasoning behaviour. Such research may focus, for example, on how people perform on tests of reasoning such as intelligence or IQ tests, or on how well people's reasoning matches ideals set by logic (see, for example, the Wason test ). [ 94 ] Experiments examine how people make inferences from conditionals like if A then B and how they make inferences about alternatives like A or else B . [ 95 ] They test whether people can make valid deductions about spatial and temporal relations like A is to the left of B or A happens after B , and about quantified assertions like all the A are B . [ 96 ] Experiments investigate how people make inferences about factual situations, hypothetical possibilities, probabilities, and counterfactual situations. [ 97 ]
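As a rough sketch of the logical ideals against which such experiments compare human performance (illustrative only; the helper names implies and valid are hypothetical and not taken from any cited study), one can enumerate the truth assignments for a conditional and check which inference patterns are valid:

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    # Material conditional: "if A then B" is false only when A is true and B is false.
    return (not a) or b

def valid(premises, conclusion) -> bool:
    # An inference is valid if every truth assignment that makes all the
    # premises true also makes the conclusion true.
    return all(
        conclusion(a, b)
        for a, b in product([True, False], repeat=2)
        if all(p(a, b) for p in premises)
    )

# Modus ponens ("if A then B; A; therefore B") is valid:
print(valid([implies, lambda a, b: a], lambda a, b: b))  # True
# Affirming the consequent ("if A then B; B; therefore A") is not:
print(valid([implies, lambda a, b: b], lambda a, b: a))  # False
```

Modus ponens comes out valid while the common error of affirming the consequent does not; benchmarks of roughly this kind are what experimental results are measured against.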
Developmental psychologists investigate the development of reasoning from birth to adulthood. Piaget's theory of cognitive development was the first complete theory of reasoning development. Subsequently, several alternative theories were proposed, including the neo-Piagetian theories of cognitive development . [ 98 ]
The biological functioning of the brain is studied by neurophysiologists , cognitive neuroscientists , and neuropsychologists . This includes research into the structure and function of normally functioning brains, as well as of damaged or otherwise unusual brains. In addition to carrying out research into reasoning, some psychologists—for example clinical psychologists and psychotherapists —work to alter people's reasoning habits when those habits are unhelpful.
In artificial intelligence and computer science , scientists study and use automated reasoning for diverse applications including automated theorem proving , the formal semantics of programming languages , and formal specification in software engineering .
Meta-reasoning is reasoning about reasoning. In computer science, a system performs meta-reasoning when reasoning about its operation. [ 99 ] This requires a programming language capable of reflection , the ability to observe and modify its own structure and behaviour.
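As a minimal sketch of what such reflection makes possible (the function names add and meta_reason are hypothetical and not tied to any system mentioned above), a program can inspect one of its own callables and reason about it before invoking it:

```python
import inspect

def add(x: int, y: int) -> int:
    # An ordinary, object-level reasoning step.
    return x + y

def meta_reason(fn, *args):
    # Meta-level step: the system inspects its own callable (reflection)
    # and reasons about it before deciding whether to invoke it.
    sig = inspect.signature(fn)
    if len(args) != len(sig.parameters):
        return f"refusing to call {fn.__name__}: expected {len(sig.parameters)} argument(s), got {len(args)}"
    return f"{fn.__name__}{args} -> {fn(*args)}"

print(meta_reason(add, 2, 3))  # add(2, 3) -> 5
print(meta_reason(add, 2))     # refusing to call add: expected 2 argument(s), got 1
```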
A species could benefit greatly from better abilities to reason about, predict, and understand the world. French social and cognitive scientists Dan Sperber and Hugo Mercier argue that, aside from these benefits, other forces could have been driving the evolution of reason. They point out that reasoning is very difficult for humans to do effectively, and that it is hard for individuals to doubt their own beliefs ( confirmation bias ). Reasoning is most effective when done as a collective—as demonstrated by the success of projects like science . They suggest that the pressures at play involve not just individual selection but also group selection . Any group that managed to find ways of reasoning effectively would reap benefits for all its members, increasing their fitness . This could also help explain why humans, according to Sperber, are not optimized to reason effectively alone. Sperber and Mercier's argumentative theory of reasoning claims that reason may have more to do with winning arguments than with searching for the truth. [ 100 ]
Aristotle famously described reason (with language) as a part of human nature , because of which it is best for humans to live "politically" meaning in communities of about the size and type of a small city state ( polis in Greek). For example:
It is clear, then, that a human being is more of a political [ politikon = of the polis ] animal [ zōion ] than is any bee or than any of those animals that live in herds. For nature, as we say, makes nothing in vain, and humans are the only animals who possess reasoned speech [ logos ]. Voice, of course, serves to indicate what is painful and pleasant; that is why it is also found in other animals, because their nature has reached the point where they can perceive what is painful and pleasant and express these to each other. But speech [ logos ] serves to make plain what is advantageous and harmful and so also what is just and unjust. For it is a peculiarity of humans, in contrast to the other animals, to have perception of good and bad, just and unjust, and the like; and the community in these things makes a household or city [ polis ].... By nature, then, the drive for such a community exists in everyone, but the first to set one up is responsible for things of very great goodness. For as humans are the best of all animals when perfected, so they are the worst when divorced from law and right. The reason is that injustice is most difficult to deal with when furnished with weapons, and the weapons a human being has are meant by nature to go along with prudence and virtue, but it is only too possible to turn them to contrary uses. Consequently, if a human being lacks virtue, he is the most unholy and savage thing, and when it comes to sex and food, the worst. But justice is something political [to do with the polis ], for right is the arrangement of the political community, and right is discrimination of what is just. [ 101 ] : I.2, 1253a
If human nature is fixed in this way, we can define what type of community is always best for people. This argument has remained a central argument in all political, ethical, and moral thinking since then, and has become especially controversial since firstly Rousseau 's Second Discourse, and secondly, the Theory of Evolution . Already in Aristotle there was an awareness that the polis had not always existed and had to be invented or developed by humans themselves. The household came first, and the first villages and cities were just extensions of that, with the first cities being run as if they were still families with Kings acting like fathers. [ 101 ] : I.2, 1252b15
Friendship seems to prevail in man and woman according to nature [ kata phusin ]; for people are by nature [ tēi phusei ] pairing more than political [ politikon ], in as much as the household [ oikos ] is prior and more necessary than the polis and making children is more common [ koinoteron ] with the animals. In the other animals, community [ koinōnia ] goes no further than this, but people live together [ sumoikousin ] not only for the sake of making children, but also for the things for life; for from the start the functions [ erga ] are divided, and are different for man and woman. Thus they supply each other, putting their own into the common [ eis to koinon ]. It is for these reasons that both utility and pleasure seem to be found in this kind of friendship. [ 6 ] : VIII.12
Rousseau in his Second Discourse finally took the shocking step of claiming that this traditional account has things in reverse: with reason, language, and rationally organized communities all having developed over a long period of time merely as a result of the fact that some habits of cooperation were found to solve certain types of problems, and that once such cooperation became more important, it forced people to develop increasingly complex cooperation—often only to defend themselves from each other.
In other words, according to Rousseau, reason, language, and rational community did not arise because of any conscious decision or plan by humans or gods, nor because of any pre-existing human nature. As a result, he claimed, living together in rationally organized communities like modern humans is a development with many negative aspects compared to the original state of man as an ape. If anything is specifically human in this theory, it is the flexibility and adaptability of humans. This view of the animal origins of distinctive human characteristics later received support from Charles Darwin 's Theory of Evolution .
The two competing theories concerning the origins of reason are relevant to political and ethical thought because, according to the Aristotelian theory, a best way of living together exists independently of historical circumstances. According to Rousseau, we should even doubt that reason, language, and politics are a good thing, as opposed to being simply the best option given the particular course of events that led to today. Rousseau's theory, that human nature is malleable rather than fixed, is often taken to imply (for example by Karl Marx ) a wider range of possible ways of living together than traditionally known.
However, while Rousseau's initial impact encouraged bloody revolutions against traditional politics, including both the French Revolution and the Russian Revolution , his own conclusions about the best forms of community seem to have been remarkably classical, in favor of city-states such as Geneva , and rural living . | https://en.wikipedia.org/wiki/Reason |
In the most general terms, a reason is a consideration in an argument which justifies or explains an action, a belief , an attitude , or a fact . [ 1 ]
Normative reasons are what people appeal to when making arguments about what people should do or believe. For example, that a doctor's patient is grimacing is a reason to believe the patient is in pain. That the patient is in pain is a reason for the doctor to do things to alleviate the pain.
Explanatory reasons are explanations of why things happened. For example, the reason the patient is in pain is that her nerves are sending signals from her tissues to her brain.
A reason, in many cases, is brought up by the question "why?", and answered following the word because . Additionally, words and phrases such as since , due to , as , considering ( that ), a result ( of ), and in order to , for example, all serve as explanatory locutions that precede the reason to which they refer.
In philosophy, it is common to distinguish between three kinds of reason. [ 2 ]
Normative or justifying reasons are often said to be "considerations which count in favor" of some state of affairs (this is, at any rate, a common view, notably held by T. M. Scanlon and Derek Parfit ). [ 3 ] [ 4 ]
Explanatory reasons are considerations which serve to explain why things have happened or why states of affairs are the way they are. In other words, "reason" can also be a synonym for " cause ". For example, a reason a car starts is that its ignition is turned. In the context of explaining the actions of beings who act for reasons (i.e., rational agents ), these are called motivating reasons —e.g., the reason Bill went to college was to learn; i.e., that he would learn was his motivating reason. At least where a rational agent is acting rationally, her motivating reasons are those considerations which she believes count in favor of her so acting. [ citation needed ]
Some philosophers (one being John Broome [ 5 ] ) view normative reasons as the same as "explanations of ought facts". Just as explanatory reasons explain why some descriptive fact obtains (or came to obtain), normative reasons on this view explain why some normative facts obtain, i.e., they explain why some state of affairs ought to come to obtain (e.g., why someone should act or why some event ought to take place).
Philosophers, when discussing reasoning that is influenced by norms , commonly make a distinction between theoretical reason and practical reason . [ 6 ] These are capacities that draw on epistemic reasons (matters of fact and of explanation) or practical reasons (reasons for action) respectively. Epistemic reasons (also called theoretical or evidential reasons ) are considerations which count in favor of believing some proposition to be true. Practical reasons are considerations which count in favor of some action or the having of some attitude (or at least, count in favor of wanting or trying to bring those actions or attitudes about).
In informal logic , a reason consists of either a single premise or co-premises in support of an argument . In formal symbolic logic , only single premises occur. In informal reasoning, two types of reasons exist. An evidential reason is a foundation upon which to believe that or why a claim is true. An explanatory reason attempts to convince someone how something is or could be true, but does not directly convince one that it is true. | https://en.wikipedia.org/wiki/Reason_(argument) |
Beyond ( a ) reasonable doubt is a legal standard of proof required to validate a criminal conviction in most adversarial legal systems . [ 1 ] It is a higher standard of proof than the standard of balance of probabilities (US English: preponderance of the evidence) commonly used in civil cases , reflecting the principle that in criminal cases the stakes are significantly higher: a person found guilty can be deprived of liberty or, in extreme cases, life itself, in addition to the collateral consequences and social stigma attached to conviction. The prosecution bears the burden of presenting compelling evidence that establishes guilt beyond a reasonable doubt; if the trier of fact is not convinced to that standard, the accused is entitled to an acquittal . Originating in part from the principle sometimes called Blackstone's ratio —“It is better that ten guilty persons escape than that one innocent suffer”—the standard is now widely accepted in criminal justice systems throughout common law jurisdictions.
Because the defendant is presumed innocent , prosecutors must prove each element of the crime charged beyond a reasonable doubt in order to obtain a conviction. [ 2 ] [ 3 ] This means the evidence must leave little actual doubt in the mind of the judge or jury that the defendant committed the alleged offense. [ 4 ] Unreasonable or purely speculative doubts are excluded, whereas doubts grounded in tangible conflicts within the evidence or its sufficiency warrant an acquittal. In many jurisdictions, the phrase “reasonable doubt” remains purposefully undefined in jury instructions to reduce confusion, although critics argue that the lack of a clear definition may itself cause confusion. [ 5 ]
Academic literature has identified several possible interpretations of what “reasonable doubt” entails. [ 5 ] One approach focuses on whether a doubt can be articulable , meaning grounded in a coherent reason or an alternative narrative, rather than in vague distrust or pure speculation. Critics note, however, that this risks shifting the burden of proof to the defendant if they must articulate reasons to doubt guilt. Another approach frames the inquiry around whether a “reasonable person” would entertain the doubt. But critics observe that this easily becomes circular: a doubt is “reasonable” if a “reasonable” person would hold it, offering little additional guidance. A third, so-called “probabilistic” approach suggests adopting a numerical threshold (e.g., 90% or 95% certainty). Some scholars contend that such explicit quantification reflects the actual logic behind proof standards and is consistent with longstanding principles about balancing the costs of wrongful convictions and wrongful acquittals. [ 5 ]
Most legal systems avoid placing an explicit numerical figure on “reasonable doubt” and rely instead on jurors’ or judges’ subjective judgment; however, empirical studies show that laypeople vary widely in the probability threshold they associate with “beyond a reasonable doubt.” [ 6 ] [ 7 ] Some scholars have proposed that a probabilistic or numerical approach—e.g., equating “reasonable doubt” to a particular probability threshold—can mitigate these inconsistencies. [ 5 ]
Critics of the standard, including some jurists and legal scholars, point out that the instruction “beyond a reasonable doubt” can be circular: it does not clarify how certain the jury must be, only that it must be “more certain” than other standards (such as preponderance of the evidence). [ 8 ] Various courts have tried to elaborate with phrases such as “the kind of doubt that would make a person hesitate to act,” or “moral certainty,” but these have often been deemed unhelpful or potentially confusing. [ 5 ]
Research suggests that where the law intends a clear distinction between “preponderance,” “clear and convincing evidence,” and “beyond a reasonable doubt,” jurors given only verbal formulations struggle to separate these levels in practice. Studies of mock jurors have found no consistent difference in outcomes under purely verbal instructions. By contrast, instructions incorporating some numerical guidance produce more consistent results. [ 9 ] [ 5 ]
In England and Wales, the modern practice often avoids the phrase “beyond reasonable doubt” in favor of telling jurors they must be “sure” of the defendant’s guilt. This rewording follows appeals court rulings expressing concern that the traditional formula might confuse jurors. [ 10 ] In Woolmington v DPP (1935), the House of Lords famously declared that there is a “golden thread” running through English criminal law: the burden of proof is on the prosecution at all times. [ 11 ]
The Supreme Court of Canada has emphasized that jurors should be told the prosecution bears the entire burden and that the doubt must be based on reason and common sense. In R. v. Lifchus, the Court advised jurors that absolute certainty is not required, only that they be “sure” based on the evidence, and that proof of probable guilt is insufficient. [ 12 ] Later cases, such as R. v. Starr, clarified that “proof beyond a reasonable doubt” lies much closer to absolute certainty than to a balance of probabilities. [ 13 ]
In the United States, the notion that an accused must be found guilty “beyond a reasonable doubt” is constitutionally mandated under the Due Process Clause. [ 14 ] Although the Supreme Court has discussed this standard in several decisions, it has resisted providing a strict definition; indeed, it has stated that attempts to define the term “do not usually result in making it any clearer to the minds of the jury.” [ 15 ] Critics argue that juror misunderstanding of what “reasonable doubt” requires contributes to inconsistent outcomes and complicates the fairness of the justice system. [ 5 ]
Many proposals to quantify “beyond a reasonable doubt” draw upon the so-called Blackstonian ratio—for example, equating “It is better that ten guilty persons escape than that one innocent suffer” to a 90% threshold of certainty. However, courts vary substantially in how they refer to or adopt Blackstone’s formulation. Some imply a ratio of 1:5 or 1:10, while others have cited values as high as 1:99, resulting in no single uniform benchmark nationwide. Recent scholarship has attempted to catalogue the implicit level of certainty state-by-state. [ 5 ]
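On the common decision-theoretic reading of such ratios (a sketch of the standard argument, not a rule adopted by any court), a wrongful conviction is treated as n times as costly as a wrongful acquittal, and conviction is warranted only when its expected cost is lower than that of acquittal:

$$(1-p)\,C_{wc} < p\,C_{wa}, \qquad C_{wc} = n\,C_{wa} \;\Longrightarrow\; p > \frac{n}{n+1},$$

where p is the assessed probability of guilt, C_{wc} the cost of a wrongful conviction, and C_{wa} the cost of a wrongful acquittal. A Blackstone ratio of n = 10 thus corresponds to a threshold of 10/11, roughly 91%, which is the origin of the 90% figure mentioned above; a ratio of 1:99 would push the threshold to 0.99.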
Civil law jurisdictions often employ a similar requirement that the judge’s conviction be “intimate” or thorough, although explicit percentages are likewise generally avoided. Japan also uses a high standard of persuasion in criminal cases, influenced by the principle of in dubio pro reo (when in doubt, rule for the accused), but judges sometimes diverge in how strictly they apply it. [ 16 ] | https://en.wikipedia.org/wiki/Reasonable_doubt |
In law, a reasonable person or reasonable man is a hypothetical person whose character and care conduct, under any common set of facts, is decided through reasoning of good practice or policy. [ 1 ] [ 2 ] It is a legal fiction [ 3 ] crafted by the courts and communicated through case law and jury instructions . [ 4 ] In some practices, for circumstances arising from an uncommon set of facts, [ 2 ] this person represents a composite of a relevant community's judgement as to how a typical member of that community should behave in situations that might pose a threat of harm (through action or inaction) to the public. [ 5 ]
The reasonable person is used as a tool to standardize, teach law students, or explain the law to a jury. [ 4 ] The reasonable person belongs to a family of hypothetical figures in law including: the "right-thinking member of society", the " officious bystander ", the "reasonable parent", the "reasonable landlord", the "fair-minded and informed observer", the " person having ordinary skill in the art " in patent law . Ancient predecessors of the reasonable person include the bonus pater familias (the good family father) of ancient Rome, [ 6 ] the bonus vir (the good man) and spoudaios (the earnest person) in ancient Greece as well as the geru maa (the silent person) in ancient Egypt. [ 7 ]
While there is a loose consensus on its meaning in black letter law , there is no accepted technical definition, and the "reasonable person" is an emergent concept of common law . The reasonable person is not an average person or a typical person, leading to difficulties in applying the concept in some criminal cases, especially in regard to the partial defence of provocation. [ 8 ] Most recently, Valentin Jeutner has argued that it matters less whether the reasonable person is reasonable, officious, or diligent; what matters most, on his view, is that the reasonable person is another person. [ 9 ] As with legal fiction in general, it is somewhat susceptible to ad hoc manipulation or transformation. Strictly according to the fiction, it is misconceived for a party to seek evidence from actual people to establish how someone would have acted or what he would have foreseen. [ 6 ] [ 3 ] However, changes in the standard may be "learned" by high courts over time if there is a compelling consensus of public opinion. [ 1 ] [ 2 ]
The standard also holds that each person owes a duty to behave as a reasonable person would under the same or similar circumstances. [ 10 ] [ 11 ] While the specific circumstances of each case will require varying kinds of conduct and degrees of care, the reasonable person standard undergoes no variation itself. [ 12 ] [ 13 ] The standard does not exist independently of other circumstances within a case that could affect an individual's judgement. In cases resulting in judgment notwithstanding verdict , a vetted jury's composite judgment can be deemed beyond that of the reasonable person, and thus overruled.
The "reasonable person" construct can be found applied in many areas of the law. The standard performs a crucial role in determining negligence in both criminal law —that is, criminal negligence —and tort law. The standard is also used in contract law, [ 14 ] to determine contractual intent, or (when there is a duty of care ) whether there has been a breach of the standard of care . The intent of a party can be determined by examining the understanding of a reasonable person, after consideration is given to all relevant circumstances of the case including the negotiations, any practices the parties have established between themselves, usages and any subsequent conduct of the parties. [ 15 ] During the Nuremberg Trials , Sir David Maxwell Fyfe introduced the standard of the reasonable person to international law. [ 16 ] Nowadays known as the standard of the 'reasonable military commander', international courts use it to assess the conduct of military officers in times of war. [ 17 ]
The "reasonable man" appeared in Richard Hooker 's defence of conservatism in religion, the Laws of Ecclesiastical Polity (1594-7), where he preferred Papists to Turks and accepted the opinions of religious experts when there was no reason to dissent. [ 18 ]
In 1835, Adolphe Quetelet detailed the characteristics of l'homme moyen ( French , "average man"). His work is translated into English several ways. As a result, some authors pick "average man", "common man", "reasonable man", or stick to the original " l'homme moyen ". Quetelet was a Belgian astronomer , mathematician , statistician and sociologist . He documented the physical characteristics of man on a statistical basis and discussed man's motivations when acting in society. [ 19 ]
Two years later, the "reasonable person" made his first appearance in the English case of Vaughan v. Menlove (1837). [ 20 ] In Menlove , the defendant had stacked hay on his rental property in a manner prone to spontaneous ignition. After he had been repeatedly warned over the course of five weeks, the hay ignited and burned the defendant's barns and stable and then spread to the landlord's two cottages on the adjacent property. Menlove's attorney admitted his client's "misfortune of not possessing the highest order of intelligence," arguing that negligence should only be found if the jury decided Menlove had not acted with " bona fide [and] to the best of his [own] judgment."
The Menlove court disagreed, reasoning that such a standard would be too subjective, instead preferring to set an objective standard for adjudicating cases:
The care taken by a prudent man has always been the rule laid down; and as to the supposed difficulty of applying it, a jury has always been able to say, whether, taking that rule as their guide, there has been negligence on the occasion in question. Instead, therefore, of saying that the liability for negligence should be co-extensive with the judgment of each individual, which would be as variable as the length of the foot of each individual, we ought rather to adhere to the rule which requires in all cases a regard to caution such as a man of ordinary prudence would observe. That was, in substance, the criterion presented to the jury in this case and, therefore, the present rule must be discharged.
English courts upheld the standard again nearly 20 years later in Blyth v. Company Proprietors of the Birmingham Water Works . [ 21 ] In the case, Sir Edward Hall Alderson held: [ 22 ]
Negligence is the omission to do something which a reasonable man, guided upon those considerations which ordinarily regulate the conduct of human affairs, would do, or doing something which a prudent and reasonable man would not do.
American jurist Oliver Wendell Holmes Jr. explained the theory behind the reasonable person standard as stemming from the impossibility of "measuring a man's powers and limitations." [ 23 ] Individual, personal quirks inadvertently injuring the persons or property of others are no less damaging than intentional acts. For society to function, "a certain average of conduct, a sacrifice of individual peculiarities going beyond a certain point, is necessary to the general welfare." [ 23 ] Thus, a reasonable application of the law is sought, compatible with planning, working, or getting along with others. As such, "his neighbors accordingly require him, at his proper peril, to come up to their standard, and the courts which they establish decline to take his personal equation into account." [ 23 ] He heralded the reasonable person as a legal fiction whose care conduct, under any common set of facts, is chosen by the courts, or "learned" over time where there is a compelling consensus of public opinion. [ 1 ] [ 2 ]
The reasonable person standard, contrary to popular conception, is intentionally distinct from that of the "average person," who is not necessarily guaranteed to always be reasonable. [ 24 ] The reasonable person will weigh all of the following factors before acting:
Taking such actions requires the reasonable person to be appropriately informed, capable, aware of the law, and fair-minded. Such a person might do something extraordinary in certain circumstances, but whatever that person does or thinks, it is always reasonable.
The reasonable person has been called an "excellent but odious character." [ 25 ]
He is an ideal, a standard, the embodiment of all those qualities which we demand of the good citizen ... [he] invariably looks where he is going, ... is careful to examine the immediate foreground before he executes a leap or bound; ... neither stargazes nor is lost in meditation when approaching trapdoors or the margins of a dock; ... never mounts a moving [bus] and does not alight from any car while the train is in motion, ... uses nothing except in moderation, and even flogs his child in meditating only on the golden mean . [ 26 ]
English legal scholar Percy Henry Winfield summarized much of the literature by observing that:
[H]e has not the courage of Achilles, the wisdom of Ulysses or the strength of Hercules, nor has he the prophetic vision of a clairvoyant. He will not anticipate folly in all its forms but he never puts out of consideration the teachings of experience and so will guard against negligence of others when experience shows such negligence to be common. He is a reasonable man but not a perfect citizen, nor a "paragon of circumspection. ..." [ 27 ]
Under United States common law, a well known—though nonbinding—test for determining how a reasonable person might weigh the criteria listed above was set down in United States v. Carroll Towing Co. [ 28 ] in 1947 by the Chief Judge of the U.S. Court of Appeals for the Second Circuit, Learned Hand . The case concerned a barge that had broken her mooring with the dock. Writing for the court, Hand said:
[T]he owner's duty, as in other similar situations, to provide against resulting injuries is a function of three variables: (1) The probability that she will break away; (2) the gravity of the resulting injury, if she does; (3) the burden of adequate precautions.
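Hand's three variables are often compressed into the algebraic form commonly called the Hand formula (a standard later paraphrase rather than the opinion's exact wording): a precaution is required, and its omission negligent, when

$$B < P \times L,$$

where B is the burden of the precaution, P the probability of the injury, and L the gravity of the resulting loss. As a purely hypothetical illustration, if a mooring check costing the equivalent of $50 would avert a 1% chance of $100,000 in damage, then B = 50 is less than P × L = 0.01 × 100,000 = 1,000, and a reasonable owner would take the precaution.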
While the test offered by Hand does not encompass all of the criteria listed above, juries in a negligence case might well still be instructed to take the other factors into consideration in determining whether the defendant was negligent. [ 29 ]
The Sedona Conference issued its Commentary on a Reasonable Security Test to advance the Hand Rule for a cybersecurity context. [ 30 ] The commentary adds three important articulations to the Hand Rule: a person is reasonable if no alternative safeguard would have provided an added benefit greater than the added burden; the utility of the risk should be considered as a factor in the calculation (as either a cost or a benefit, depending on the situation); and both qualitative and quantitative factors may be used in the test. [ 31 ]
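A rough sketch of how such a benefit-versus-burden comparison might be operationalized follows (the Safeguard class, names, and figures are hypothetical illustrations, not drawn from the Commentary):

```python
from dataclasses import dataclass

@dataclass
class Safeguard:
    name: str
    burden: float          # annualized cost of implementing the safeguard
    risk_reduction: float  # expected annual loss avoided if it is adopted

def added_benefit_exceeds_burden(current: Safeguard, alternative: Safeguard) -> bool:
    # True if switching to the alternative yields more added benefit than added burden.
    added_benefit = alternative.risk_reduction - current.risk_reduction
    added_burden = alternative.burden - current.burden
    return added_benefit > added_burden

current = Safeguard("passwords only", burden=10_000, risk_reduction=50_000)
alternative = Safeguard("passwords + MFA", burden=25_000, risk_reduction=120_000)

# Under this sketch, the current posture looks unreasonable if some alternative
# safeguard would have provided an added benefit greater than its added burden.
print(added_benefit_exceeds_burden(current, alternative))  # True
```

On these made-up numbers the alternative's added benefit (70,000) exceeds its added burden (15,000), so the current posture would appear unreasonable under such a test.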
The legal fiction [ 3 ] of the reasonable person is an ideal, as nobody is perfect. Everyone has limitations [ clarification needed ] , so the standard requires only that people act similarly to how "a reasonable person under the circumstances" would, as if their limitations were themselves circumstances. [ citation needed ] As such, courts require that the reasonable person be viewed as having the same limitations as the defendant.
For example, a disabled defendant is held to a standard that represents how a reasonable person with that same disability would act. [ 32 ] This is no excuse for poor judgment, or trying to act beyond one's abilities. Were it so, there would be as many standards as there were defendants; and courts would spend innumerable hours, [ citation needed ] and the parties much more money, on determining that particular defendant's reasonableness, character, and intelligence [ clarify ] . [ citation needed ]
By using the reasonable person standard, courts instead use an objective tool [ weasel words ] and avoid such subjective evaluations. [ citation needed ] The result is a standard that allows the law to behave in a uniform, foreseeable, and neutral manner [ weasel words ] [ citation needed ] when attempting to determine liability. [ dubious – discuss ]
One broad allowance made to the reasonable person standard is for children. The standard here requires that a child act in a similar manner to how a "reasonable person of like age, intelligence, and experience under like circumstances" would act. [ 33 ] In many common law systems, children under the age of 6 or 7 are typically exempt from any liability, whether civil or criminal, as they are deemed to be unable to understand the risk involved in their actions. This is called the defense of infancy : in Latin, doli incapax. [ citation needed ] In some jurisdictions, one of the exceptions to these allowances concern children engaged in what is primarily considered to be high-risk adult activity, such as operating a motor vehicle, [ 34 ] [ 35 ] and in some jurisdictions, children can also be " tried as an adult " for serious crimes, such as murder , which causes the court to disregard the defendant's age. [ citation needed ]
The reasonable person standard makes no allowance for the mentally ill. [ 36 ] Such a refusal goes back to the standard set in Menlove , where Menlove's attorney argued for the subjective standard . In the 170 years since, the law has kept to the legal judgment of having only the single, objective standard. Such judicial adherence sends a message that the mentally ill would do better to refrain from taking risk-creating actions, unless they exercise a heightened degree of self-restraint and precaution, if they intend to avoid liability.
Generally, the courts have reasoned that by not accepting mental illness as a bar to recovery, a potentially liable third party, such as a caregiver, will be more likely to protect the public. The courts have also reasoned that members of the public cannot identify a mentally ill person as readily as they can identify a child or someone with a physical disability.
When a person attempts a skilful act, the "reasonable person under the circumstances" test is elevated to a standard of whether the person acted how a "reasonable professional under the circumstances" would have, whether or not that person is actually a professional, has training, or has experience. [ 37 ] Other factors also become relevant, such as the degree to which a professional is educated (i.e., whether a specialist within the specific field, or just a general practitioner of the trade), and customary practices and general procedures of similar professionals. However, such other relevant factors are never dispositive.
Some professions may maintain a custom or practice long after a better method has become available. The new practices, though less risky, may be entirely ignored. In such cases, the practitioner may very well have acted unreasonably despite following custom or general practices. [ 38 ]
In healthcare, plaintiffs must prove via expert testimony the standard of medical care owed and a departure from that standard. The only exception to the requirement of expert testimony is where the departure from accepted medical practices was so egregious that a layperson can readily recognize the departure. [ 39 ]
However, controversial medical practices can be deemed reasonable when followed by a respected and reputable minority of the medical field, [ 40 ] or where the medical profession cannot agree over which practices are best. [ 41 ]
The "reasonable officer" standard is a method often applied to law enforcement and other armed professions to help determine if a use of force was excessive. The test is whether an appropriately trained professional, knowing what the officer knew at the time and following guidelines (such as a force continuum ), would have used the same level of force or higher. If the level of force is justified, the quantity of force is usually presumed to have been necessary unless there are other factors. For example, if a trained police officer was justified in fatally shooting a suspect, the number of shots is presumed to have been necessary barring other factors, such as a reckless disregard of others' safety or that additional force was used when the suspect was no longer a threat.
When anyone undertakes a skilful task that creates a risk to others, that person is held to the minimum standard of how a reasonable person experienced in that task would act, [ 42 ] regardless of their actual level of experience. [ 35 ] [ 43 ]
Factors beyond the defendant's control are always relevant. So too is the context within which each action is taken. Many things affect how a person acts: individual perceptions, knowledge, the weather, etc. The standard of care required depends on the circumstances, but is always that which is reasonable. [ 44 ]
While community customs may be relied upon to indicate what kind of action is expected in the circumstances, these are not themselves conclusive of what a reasonable person would do. [ 24 ] [ 45 ]
It is precisely for this wide-ranging variety of possible facts that the reasonable person standard is so broad (and often confusing and difficult to apply). However, a few general areas of relevant circumstances rise above the others.
Allowing for circumstances under which a person must act urgently is important to prevent hindsight bias by the trier of fact . A reasonable person may not always act as they would when more relaxed. It is fair that actions be judged in light of any exigent conditions that could have affected how the defendant acted. [ 46 ] [ 47 ]
People must make do with what they have or can get. Such circumstances are relevant to any determination of whether the defendant acted reasonably. Where resources are scarce, some actions may be reasonable that would not be were there plenty.
Because a reasonable person is objectively presumed to know the law, noncompliance with a local safety statute may also constitute negligence. The related doctrine of negligence per se addresses the circumstances under which the law of negligence can become an implied cause of action for breaching a statutory standard of care. Conversely, minimal compliance with a safety statute does not always absolve a defendant if the trier of fact determines that a reasonable person would have taken actions beyond and in excess of what the statute requires. [ 48 ] The trier of fact can deem the defendant's duty of care met by finding that the statute's standard itself is reasonable and the defendant acted in accordance with what the statute contemplated. [ 49 ] [ 50 ] [ 51 ]
For common law contracts, disputes over contract formation are subjected to what is known as the objective test of assent in order to determine whether a contract exists. This standard is also known as the officious bystander , reasonable bystander , reasonable third party , or reasonable person in the position of the party . [ 52 ] This is in contrast to the subjective test employed in most civil law jurisdictions. The test stems from attempts to balance the competing interests of the judicial policies of assent and of reliability. The former holds that no person ought to be contractually obligated if they did not consent to such an agreement; the latter holds that if no person can rely on actions or words demonstrating consent, then the whole system of commercial exchange will ultimately collapse. [ 53 ]
Prior to the 19th century, courts used a test of subjective evaluation; [ 53 ] that is, the trier of fact determined each party's understanding. [ 54 ] If both parties were of the same mind and understanding on matters, then assent was manifested and the contract was valid. Between the 19th and 20th centuries, the courts shifted toward the objectivist test, reasoning that subjective testimony was often unreliable and self-serving. [ 53 ]
From those opposite principles, modern law has found its way to a rough middle ground, though it still shows a strong bias toward the objective test. [ 52 ] Promises and agreements are reached through manifestations of consent, and parties are liable for actions that deliberately manifest such consent; however, evidence of either party's state of mind can be used to determine the context of the manifestation if the evidence is reliable and compatible with the manifestation in question, though such evidence is typically given very little weight. [ 54 ]
Another circumstance where the reasonable bystander test is used occurs when one party has inadvertently misstated the terms of the contract, and the other party sues to enforce those terms: if it would have been clear to a reasonable bystander that a mistake had been made, then the contract is voidable by the party who made the error; otherwise, the contract is binding.
A variant of the reasonable person can be found in sexual harassment law as the reasonable woman standard. The variation recognizes a difference between men and women regarding the effect of unwanted interaction with a sexual tone. As women have historically been more vulnerable to rape and sex-related violence than have men, some courts believe that the proper perspective for evaluating a claim of sexual harassment is that of the reasonable woman. Notably, Justice Antonin Scalia took the position that women did not have constitutional protection from discrimination under the Fourteenth Amendment's Equal Protection Clause, a view that, by extension, would render the "reasonable woman" standard moot. However, that has not been the majority opinion of the court. [ 55 ]
Though the use of the reasonable woman standard has gained traction in some areas of the law, the standard has not escaped the attention of humorists. In 1924, legal humorist A. P. Herbert considered the concept of the reasonable man at length in the fictional case of "Fardell v. Potts." In Herbert's fictional account, the judge addressed the lack of a reasonable woman standard in the common law, and ultimately concluded that "a reasonable woman does not exist." [ 56 ]
The concept of l'homme moyen sensuel does not speak of a reasonable person's ability, actions, or understandings. Rather it refers to the response of a reasonable person when presented with some form of information either by image or sound, or upon reading a book or magazine. A well-known application of the concept is Judge John M. Woolsey's lifting of the ban on the book Ulysses by James Joyce . [ 57 ] That ruling contemplated the effect the book would have upon a reasonable person of reasonable sensibility. Similarly, when the publisher of Howl and Other Poems was charged in California with publishing an obscene book, the concept of l'homme moyen sensuel influenced the court's finding of innocence. [ 58 ] It was nearly two decades after Woolsey that the US Supreme Court set down the standard by which materials, when viewed by l'homme moyen sensuel , were judged either obscene or not. [ 59 ] Generally, it has been l'homme moyen sensuel that has dictated what is and is not obscene or pornographic in books, movies, pictures, and now the Internet for at least the past 100 years.
Very often, for instance, in the case of noise ordinances , the enforcement of the law is only for the purpose of protecting the right of a "reasonable person of normal sensitivity". [ 60 ] [ 61 ] [ 62 ] | https://en.wikipedia.org/wiki/Reasonable_person |
Reasoning language models ( RLMs ) are large language models that have been further trained to solve multi-step reasoning tasks. [ 1 ] These models perform better on logical, mathematical or programmatic tasks than traditional autoregressive LLMs, have the ability to backtrack , and employ test-time compute as an additional scaling axis beyond training examples , parameter count, and train-time compute.
o1-preview, an LLM with enhanced reasoning, was released in September 2024. [ 2 ] The full version, o1 , followed in December 2024. OpenAI also began sharing results on its successor, o3 . [ 3 ]
The development of reasoning LLMs has illustrated what Rich Sutton termed the "bitter lesson": that general methods leveraging computation often outperform those relying on specific human insights. [ 4 ] For instance, some research groups, such as the Generative AI Research Lab (GAIR), initially explored complex techniques like tree search and reinforcement learning in attempts to replicate o1's capabilities. However, they found, as documented in their "o1 Replication Journey" papers, that knowledge distillation (training a smaller model to mimic o1's outputs) was surprisingly effective. This highlighted the power of distillation in this context.
Alibaba also released reasoning versions of its Qwen LLMs in November 2024.
In December 2024, Google introduced Deep Research in Gemini , [ 5 ] a feature in Gemini that conducts multi-step research tasks.
On December 16, 2024, an experiment using a Llama 3B model demonstrated that by scaling test-time compute, a relatively small model could outperform a much larger Llama 70B model on challenging reasoning tasks. This result highlighted that improved inference strategies can unlock latent reasoning capabilities even in compact models. [ 6 ]
In January 2025, DeepSeek released R1, a model competitive with o1 at lower cost, highlighting the effectiveness of GRPO. [ 7 ] On January 25, 2025, DeepSeek launched a feature in their DeepSeek R1 model, enabling the simultaneous use of search and reasoning capabilities, which allows for more efficient integration of data retrieval with reflective reasoning processes. OpenAI subsequently released o3-mini, followed by Deep Research which is based on o3 . [ 8 ] The power of distillation was further demonstrated by s1-32B, achieving strong performance with budget forcing and scaling techniques. [ 9 ]
On February 2, 2025, OpenAI released deep research, [ 10 ] a tool that integrates reasoning and web search in a unified workflow, allowing users to perform complex research tasks that require multi-step reasoning and data synthesis from multiple sources. It is based on o3 and can take from 5 to 30 minutes to generate comprehensive reports. [ 11 ]
A large language model (LLM) can be finetuned on a dataset of reasoning tasks with example solutions and reasoning traces. The fine-tuned model can then produce its own reasoning traces for new problems. [ 12 ] [ 13 ]
As it is expensive to get humans to write reasoning traces for an SFT dataset, researchers have proposed ways to automatically construct SFT datasets. In rejection sampling finetuning (RFT), new reasoning traces are collected via a sampling-and-filtering loop in which candidate traces are generated, those reaching a correct final answer are kept, and the rest are rejected. [ 14 ]
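A minimal sketch of such a loop, assuming hypothetical helpers `generate(model, prompt, n)` (sample n reasoning traces) and `final_answer(trace)` (extract the answer from a trace), might look like this:

```python
# A minimal sketch of a rejection-sampling finetuning loop. The helpers
# `generate(model, prompt, n)` and `final_answer(trace)` are hypothetical.
def build_rft_dataset(model, problems, n_samples=16):
    dataset = []
    for prompt, gold_answer in problems:
        kept = set()
        for trace in generate(model, prompt, n=n_samples):
            # Rejection step: keep only traces whose final answer is correct.
            if final_answer(trace) != gold_answer:
                continue
            # Deduplicate so the SFT set is not dominated by repeated traces.
            if trace in kept:
                continue
            kept.add(trace)
            dataset.append({"prompt": prompt, "completion": trace})
    return dataset  # then used for ordinary supervised finetuning
```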
A pretrained language model can be further trained by RL. In the RL formalism, a generative language model is a policy π {\displaystyle \pi } . A prompt specifying a task to solve is an environmental state x {\displaystyle x} , and the response of the language model to the prompt is an action y {\displaystyle y} . The probability that the language model responds to x {\displaystyle x} with y {\displaystyle y} is π ( y | x ) {\displaystyle \pi (y|x)} .
Training a reasoning language model by RL then consists of constructing a reward model r ( x , y ) {\displaystyle r(x,y)} to guide the RL process. Intuitively, a reward model describes how desirable, appropriate, or good the response is for the prompt. For a reasoning language model, the prompt describes a reasoning task, and the reward is high if the response solves the task and low if it fails to.
For reasoning language models, the model's response y {\displaystyle y} may be broken down into multiple steps, in which case it is written as y 1 , y 2 , … , y n {\displaystyle y_{1},y_{2},\dots ,y_{n}} .
Most recent systems use policy-gradient methods such as Proximal Policy Optimization (PPO) because PPO constrains each policy update with a clipped objective, which stabilises training for very large policies. [ 15 ]
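As an illustration, the clipped surrogate loss at the core of PPO can be sketched as follows. This is a generic NumPy sketch, not the implementation of any particular system, and it assumes that log-probabilities under the new and old policies and advantage estimates are already available as arrays:

```python
import numpy as np

# Illustrative sketch of PPO's clipped surrogate objective for a language-model
# policy; not tied to any particular training framework.
def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    ratio = np.exp(np.asarray(logp_new) - np.asarray(logp_old))  # pi_new / pi_old
    advantages = np.asarray(advantages)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Taking the elementwise minimum keeps each policy update close to the old policy.
    return -np.mean(np.minimum(unclipped, clipped))
```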
Outcome reward model, or outcome-supervised RM (ORM), [ 12 ] is a reward model that computes the reward of a step r ( x , y 1 , … , y i ) {\displaystyle r(x,y_{1},\dots ,y_{i})} determined by the final answer: r ( x , y 1 , … , y i ) = r ( x , y n ) {\displaystyle r(x,y_{1},\dots ,y_{i})=r(x,y_{n})} . They are also called "verifiers".
For tasks with an answer that is easy to verify, such as word problems in math , the outcome reward can simply be binary: 1 if the final answer is correct, and 0 otherwise. [ 12 ] If the answer is not easy to verify programmatically, humans can manually label the answers as correct or not, then the labels can be used to finetune a base model that predicts the human label. [ 13 ] For other kinds of tasks, such as creative writing, where task performance is not binary true/false, one can train a reward model by finetuning a base model on human ranked preference data, such as used in reinforcement learning from human feedback . [ 16 ] A base model can also be finetuned to predict, given a partial thinking trace x , y 1 , … , y m {\displaystyle x,y_{1},\dots ,y_{m}} , whether the final answer would be correct or not. This can then be used as a binary reward signal. [ 12 ]
The ORM is usually trained via logistic regression , i.e. minimizing cross-entropy loss . [ 17 ]
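A rough sketch of a binary outcome reward and the corresponding logistic-regression training loss is shown below; `logits` stands in for the scalar outputs of a reward head on a base model, which is an assumption made for illustration:

```python
import numpy as np

# Sketch of a binary outcome reward and the cross-entropy loss used to train an ORM.
def outcome_reward(predicted_answer, gold_answer):
    return 1.0 if predicted_answer == gold_answer else 0.0

def orm_training_loss(logits, labels):
    probs = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))  # sigmoid
    labels = np.asarray(labels, dtype=float)
    eps = 1e-12
    # Binary cross-entropy between correctness labels and predicted probabilities.
    return -np.mean(labels * np.log(probs + eps) + (1.0 - labels) * np.log(1.0 - probs + eps))
```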
Given a PRM (defined below), an ORM can be constructed by multiplying the process rewards along the reasoning trace, [ 16 ] by taking their minimum, [ 17 ] or by some other method of aggregating the process rewards. DeepSeek used a simple ORM for training the R1 model . [ 18 ]
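A small sketch of such aggregation, with the per-step process rewards given as a plain list:

```python
import math

# Sketch of aggregating per-step process rewards into a single outcome-style
# score, by product or by minimum, as described above.
def aggregate_process_rewards(step_rewards, method="product"):
    if method == "product":
        return math.prod(step_rewards)
    if method == "min":
        return min(step_rewards)
    raise ValueError(f"unknown aggregation method: {method}")
```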
Process reward model, or process-supervised RM (PRM), [ 12 ] is a reward model that computes the reward of a step r ( x , y 1 , … , y i ) {\displaystyle r(x,y_{1},\dots ,y_{i})} determined by the steps so far: ( x , y 1 , … , y i ) {\displaystyle (x,y_{1},\dots ,y_{i})} .
Given a partial thinking trace x , y 1 , … , y m {\displaystyle x,y_{1},\dots ,y_{m}} , a human can be queried as to whether the steps so far are correct, regardless of whether the ultimate answer would be correct. This can then be used as a binary reward signal. As human labels are expensive, a base model can then be finetuned to predict the human labels. [ 12 ] The PRM is usually trained by logistic regression on the human labels, i.e. by minimizing the cross-entropy loss between the true labels and the predicted labels. [ 17 ]
As an example, in a 2023 OpenAI paper, 800K process labels were collected for 75K solution traces. A labeler would be presented with a solution trace, and keep labelling "positive" if the step progresses towards the solution, "neutral" if it is not wrong, but does not progress towards solution, and "negative" if it is a mistake. As soon as a "negative" label is entered, the labeler stops labeling that thinking trace, and begins labeling another one. The idea was that, while labelling subsequent reasoning steps can provide even richer supervision signals, simply labeling up to the first error was sufficient for training a competent PRM. [ 16 ] [ 19 ]
As human labels are expensive, researchers have proposed methods to create PRM without human labels on the processes. Inspired by Monte Carlo tree search (MCTS), the Math-Shepherd method samples multiple continuations until the end, starting at each reasoning step y i {\displaystyle y_{i}} , and set the reward at that step to be either # (correct answers) # (total answers) {\displaystyle {\frac {\#{\text{(correct answers)}}}{\#{\text{(total answers)}}}}} in the case of "soft estimation", or { 1 if one of the answers is correct 0 else {\displaystyle {\begin{cases}1&{\text{if one of the answers is correct}}\\0&{\text{else}}\end{cases}}} in the case of "hard estimation". This creates process reward using only an ORM, which is usually easier or cheaper to construct. After creating these process reward labels, a PRM can be trained on them. [ 17 ] Some have tried a fully MCTS approach. [ 20 ]
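A sketch of this estimation, assuming hypothetical helpers `rollout(model, prefix_steps, n)` (complete a partial trace n times) and `is_correct(completion)` (check the final answer):

```python
# Sketch of Math-Shepherd-style process labels obtained without human annotation.
def process_reward_estimate(model, prefix_steps, n=8, mode="soft"):
    completions = rollout(model, prefix_steps, n=n)
    outcomes = [1 if is_correct(c) else 0 for c in completions]
    if mode == "soft":
        # Fraction of sampled continuations that reach a correct final answer.
        return sum(outcomes) / len(outcomes)
    # "Hard" estimation: 1 if any continuation reaches a correct answer.
    return 1 if any(outcomes) else 0
```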
One can also use an ORM to implicitly construct a PRM, similar to direct preference optimization . [ 21 ]
A trained ORM can be used to select the best response. The policy rolls out multiple responses, and the trained ORM selects the best one. This allows a simple form of test-time compute scaling ("best-of-N"). [ 13 ] [ 22 ]
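A minimal best-of-N sketch, with `sample` and `orm_score` as assumed helpers:

```python
# Best-of-N selection with a trained ORM, a simple form of test-time scaling.
def best_of_n(model, prompt, n=16):
    candidates = sample(model, prompt, n=n)
    return max(candidates, key=lambda response: orm_score(prompt, response))
```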
A trained PRM can also be used to guide reasoning by greedy tree search . That is, the policy model generates several possible next reasoning steps, the PRM selects the best one, and the process repeats. This is similar to how a trained ORM can be used to select the best response. [ 23 ] Beam search performs better than greedy search.
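A sketch of PRM-guided greedy search follows; beam search would instead keep the top-k partial traces at every step. `propose_steps`, `prm_score` and `is_finished` are assumed helpers:

```python
# Sketch of PRM-guided greedy search over reasoning steps.
def greedy_prm_search(model, prompt, max_steps=32, n_candidates=8):
    trace = []
    for _ in range(max_steps):
        candidates = propose_steps(model, prompt, trace, n=n_candidates)
        # The PRM scores each partial trace extended by a candidate step.
        best = max(candidates, key=lambda step: prm_score(prompt, trace + [step]))
        trace.append(best)
        if is_finished(trace):
            break
    return trace
```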
Lookahead search is another tree search method, where the policy model generates several possible next reasoning steps, then makes a (partial) rollout for each. If a solution endpoint is reached during the forward simulation, the process halts early. Otherwise, the PRM is used to calculate the total reward for each rollout. The step whose rollout has the highest reward is selected. [ 24 ]
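A sketch of a single lookahead decision, reusing the assumed helpers above plus a hypothetical `partial_rollout`:

```python
# Sketch of one lookahead step: each candidate next step is judged by a short
# partial rollout rather than by its immediate score alone.
def lookahead_step(model, prompt, trace, n_candidates=8, rollout_depth=4):
    def rollout_value(step):
        simulated = partial_rollout(model, prompt, trace + [step], depth=rollout_depth)
        return prm_score(prompt, simulated)
    candidates = propose_steps(model, prompt, trace, n=n_candidates)
    return max(candidates, key=rollout_value)
```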
Self-consistency can be combined with an ORM. The model would be used to generate multiple answers, and the answers would be clustered, so that each cluster has the same answer. The ORM is used to compute the reward for each answer, and the rewards within each cluster are summed. The answer corresponding to the cluster with the highest summed reward is output. [ 17 ]
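A sketch of this ORM-weighted self-consistency, again with `sample`, `final_answer` and `orm_score` as assumed helpers:

```python
from collections import defaultdict

# Sketch of self-consistency weighted by an ORM: responses are grouped by final
# answer, ORM rewards are summed within each group, and the answer of the
# highest-scoring group is returned.
def weighted_self_consistency(model, prompt, n=32):
    cluster_scores = defaultdict(float)
    for response in sample(model, prompt, n=n):
        cluster_scores[final_answer(response)] += orm_score(prompt, response)
    return max(cluster_scores, key=cluster_scores.get)
```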
Reasoning models generally outperform non-reasoning models in most benchmarks, especially on tasks requiring multi-step reasoning.
However, some benchmarks exclude reflective models due to longer response times.
The HLE , a rigorous benchmark designed to assess expert-level reasoning across mathematics, humanities, and the natural sciences, reveals substantial performance gaps among models. State-of-the-art reasoning models have demonstrated low accuracy on HLE, highlighting significant room for improvement. In particular, the full reasoning model o3 achieved an accuracy of 26.6%, [ 25 ] while its lighter counterpart, o3‑mini-high (evaluated on text‑only questions), reached 13%. [ 26 ]
The American Invitational Mathematics Examination (AIME) benchmark, a challenging mathematics competition, demonstrates significant performance differences between model types. Non-reasoning models typically solve less than 30% of AIME. In contrast, models employing reasoning techniques score between 50% and 80%. [ 27 ] While OpenAI's o1 maintained or slightly improved its accuracy from reported 2024 [ citation needed ] metrics to 2025 AIME results, o3-mini (high) achieved a higher accuracy (80%) at a significantly lower cost (approximately 12 times cheaper).
According to OpenAI's January 2025 report on o3-mini, adjustable "reasoning effort" significantly affects performance, particularly in STEM . Increasing reasoning effort from low to high boosts accuracy on benchmarks like AIME 2024, GPQA Diamond, and Codeforces , providing performance gains typically in the range of 10-30%. With high reasoning effort, o3-mini (high) achieved 87.3% in AIME (different from the MathArena AIME benchmark results), 79.7% in GPQA Diamond, 2130 Elo in Codeforces, and 49.3 in SWE-bench Verified. [ 28 ]
Reasoning models require significantly more test-time compute than non-reasoning models. On the AIME benchmark, reasoning models were 10 to 74 times more expensive [ 16 ] than non-reasoning counterparts.
Reflective reasoning increases response times, with current models taking anywhere from three seconds to several minutes to generate an answer. As reasoning depth improves, future models may require even longer processing times. | https://en.wikipedia.org/wiki/Reasoning_language_model |
In information technology a reasoning system is a software system that generates conclusions from available knowledge using logical techniques such as deduction and induction . Reasoning systems play an important role in the implementation of artificial intelligence and knowledge-based systems .
By the everyday usage definition of the phrase, all computer systems are reasoning systems in that they all automate some type of logic or decision. In typical use in the Information Technology field, however, the phrase is usually reserved for systems that perform more complex kinds of reasoning. For example, it is not used for systems that perform fairly straightforward types of reasoning, such as calculating a sales tax or customer discount, but for systems that make logical inferences about a medical diagnosis or a mathematical theorem. Reasoning systems come in two modes: interactive and batch processing. Interactive systems interface with the user to ask clarifying questions or otherwise allow the user to guide the reasoning process. Batch systems take in all the available information at once and generate the best answer possible without user feedback or guidance. [ 1 ]
Reasoning systems have a wide field of application that includes scheduling , business rule processing , problem solving , complex event processing , intrusion detection , predictive analytics , robotics , computer vision , and natural language processing .
The first reasoning systems were theorem provers, systems that represent axioms and statements in First Order Logic and then use rules of logic such as modus ponens to infer new statements. Another early type of reasoning system was the general problem solver. These were systems such as the General Problem Solver designed by Newell and Simon . General problem solvers attempted to provide a generic planning engine that could represent and solve structured problems. They worked by decomposing problems into smaller, more manageable sub-problems, solving each sub-problem and assembling the partial answers into one final answer. Another example of a general problem solver was the SOAR family of systems.
In practice, these theorem provers and general problem solvers were seldom useful for practical applications and required specialized users with knowledge of logic to operate. The first practical application of automated reasoning was expert systems . Expert systems focused on much more well-defined domains than general problem solving, such as medical diagnosis or analyzing faults in an aircraft. Expert systems also focused on more limited implementations of logic. Rather than attempting to implement the full range of logical expressions, they typically focused on modus ponens implemented via IF-THEN rules. Focusing on a specific domain and allowing only a restricted subset of logic improved the performance of such systems so that they were practical for use in the real world and not merely research demonstrations, as most previous automated reasoning systems had been. The engines used for automated reasoning in expert systems were typically called inference engines . Those used for more general logical inferencing are typically called theorem provers . [ 2 ]
With the rise in popularity of expert systems, many new types of automated reasoning were applied to diverse problems in government and industry. Some, such as case-based reasoning, were offshoots of expert systems research. Others, such as constraint satisfaction algorithms, were also influenced by fields such as decision technology and linear programming. A completely different approach, one based not on symbolic reasoning but on a connectionist model, has also been extremely productive. This latter type of automated reasoning is especially well suited to pattern matching and signal detection problems, such as text searching and face matching.
The term reasoning system can be used to apply to just about any kind of sophisticated decision support system as illustrated by the specific areas described below. However, the most common use of the term reasoning system implies the computer representation of logic. Various implementations demonstrate significant variation in terms of systems of logic and formality. Most reasoning systems implement variations of propositional and symbolic ( predicate ) logic. These variations may be mathematically precise representations of formal logic systems (e.g., FOL ), or extended and hybrid versions of those systems (e.g., Courteous logic [ 3 ] ). Reasoning systems may explicitly implement additional logic types (e.g., modal , deontic , temporal logics). However, many reasoning systems implement imprecise and semi-formal approximations to recognised logic systems. These systems typically support a variety of procedural and semi- declarative techniques in order to model different reasoning strategies. They emphasise pragmatism over formality and may depend on custom extensions and attachments in order to solve real-world problems.
Many reasoning systems employ deductive reasoning to draw inferences from available knowledge. These inference engines support forward reasoning or backward reasoning to infer conclusions via modus ponens . The recursive reasoning methods they employ are termed ' forward chaining ' and ' backward chaining ', respectively. Although reasoning systems widely support deductive inference, some systems employ abductive , inductive , defeasible and other types of reasoning. Heuristics may also be employed to determine acceptable solutions to intractable problems .
Reasoning systems may employ the closed world assumption (CWA) or open world assumption (OWA). The OWA is often associated with ontological knowledge representation and the Semantic Web . Different systems exhibit a variety of approaches to negation . As well as logical or bitwise complement , systems may support existential forms of strong and weak negation including negation-as-failure and 'inflationary' negation (negation of non- ground atoms ). Different reasoning systems may support monotonic or non-monotonic reasoning, stratification and other logical techniques.
Many reasoning systems provide capabilities for reasoning under uncertainty . This is important when building situated reasoning agents which must deal with uncertain representations of the world. There are several common approaches to handling uncertainty. These include the use of certainty factors, probabilistic methods such as Bayesian inference or Dempster–Shafer theory , multi-valued (' fuzzy ') logic and various connectionist approaches. [ 4 ]
This section provides a non-exhaustive and informal categorisation of common types of reasoning system. These categories are not absolute. They overlap to a significant degree and share a number of techniques, methods and algorithms .
Constraint solvers solve constraint satisfaction problems (CSPs). They support constraint programming . A constraint is a condition which must be met by any valid solution to a problem . Constraints are defined declaratively and applied to variables within given domains. Constraint solvers use search , backtracking and constraint propagation techniques to find solutions and to determine optimal solutions. They may employ forms of linear and nonlinear programming . They are often used to perform optimization within highly combinatorial problem spaces. For example, they may be used to calculate optimal scheduling, design efficient integrated circuits or maximise productivity in a manufacturing process. [ 5 ]
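As an illustration only (not any particular solver's implementation), a minimal backtracking search over finite domains can be sketched as follows; constraints are assumed to be predicates that return True whenever the partial assignment does not yet violate them:

```python
# A minimal backtracking constraint-satisfaction search, for illustration only.
def solve(variables, domains, constraints, assignment=None):
    assignment = dict(assignment or {})
    if len(assignment) == len(variables):
        return assignment                      # every variable assigned
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if all(constraint(assignment) for constraint in constraints):
            result = solve(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]                    # backtrack
    return None
```

Real constraint solvers add constraint propagation, variable and value ordering heuristics, and optimization support on top of this basic search.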
Theorem provers use automated reasoning techniques to determine proofs of mathematical theorems. They may also be used to verify existing proofs. In addition to academic use, typical applications of theorem provers include verification of the correctness of integrated circuits, software programs, engineering designs, etc.
Logic programs (LPs) are software programs written using programming languages whose primitives and expressions provide direct representations of constructs drawn from mathematical logic. An example of a general-purpose logic programming language is Prolog . LPs represent the direct application of logic programming to solve problems. Logic programming is characterised by highly declarative approaches based on formal logic, and has wide application across many disciplines.
Rule engines represent conditional logic as discrete rules. Rule sets can be managed and applied separately to other functionality. They have wide applicability across many domains. Many rule engines implement reasoning capabilities. A common approach is to implement production systems to support forward or backward chaining. Each rule ('production') binds a conjunction of predicate clauses to a list of executable actions.
At run-time, the rule engine matches productions against facts and executes ('fires') the associated action list for each match. If those actions remove or modify any facts, or assert new facts, the engine immediately re-computes the set of matches. Rule engines are widely used to model and apply business rules , to control decision-making in automated processes and to enforce business and technical policies.
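A minimal sketch of this match-fire loop is shown below, with each rule given as a (condition, action) pair over a set of facts. This is an illustration only; production rule engines use far more efficient matching strategies (such as the Rete algorithm):

```python
# Minimal forward-chaining sketch of the match-fire loop described above.
# Each rule is a (condition, action) pair: the condition tests the fact set,
# and the action returns any facts to assert.
def run_rules(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, action in rules:
            if condition(facts):
                new_facts = set(action(facts)) - facts
                if new_facts:                  # asserting new facts triggers re-matching
                    facts |= new_facts
                    changed = True
    return facts
```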
Deductive classifiers arose slightly later than rule-based systems and were a component of a new type of artificial intelligence knowledge representation tool known as frame languages . A frame language describes the problem domain as a set of classes, subclasses, and relations among the classes. It is similar to the object-oriented model. Unlike object-oriented models however, frame languages have a formal semantics based on first order logic.
They utilize this semantics to provide input to the deductive classifier. The classifier in turn can analyze a given model (known as an ontology ) and determine if the various relations described in the model are consistent. If the ontology is not consistent the classifier will highlight the declarations that are inconsistent. If the ontology is consistent the classifier can then do further reasoning and draw additional conclusions about the relations of the objects in the ontology.
For example, it may determine that an object is actually a subclass or instance of additional classes beyond those described by the user. Classifiers are an important technology in analyzing the ontologies used to describe models in the Semantic web . [ 6 ] [ 7 ]
Machine learning systems evolve their behavior over time based on experience . This may involve reasoning over observed events or example data provided for training purposes. For example, machine learning systems may use inductive reasoning to generate hypotheses for observed facts. Learning systems search for generalised rules or functions that yield results in line with observations and then use these generalisations to control future behavior.
Case-based reasoning (CBR) systems provide solutions to problems by analysing similarities to other problems for which known solutions already exist. Case-based reasoning uses the top (superficial) levels of similarity; namely, the object, feature, and value criteria. This distinguishes case-based reasoning from analogical reasoning in that analogical reasoning uses only the "deep" similarity criterion, i.e. relationships or even relationships of relationships, and need not find similarity on the shallower levels. This difference makes case-based reasoning applicable only among cases of the same domain, because similar objects, features, and/or values must be in the same domain, while the "deep" similarity criterion of "relationships" makes analogical reasoning applicable across domains where only the relationships are similar between the cases. CBR systems are commonly used in customer/ technical support and call centre scenarios and have applications in industrial manufacture , agriculture , medicine , law and many other areas.
A procedural reasoning system (PRS) uses reasoning techniques to select plans from a procedural knowledge base. Each plan represents a course of action for achievement of a given goal . The PRS implements a belief–desire–intention model by reasoning over facts (' beliefs ') to select appropriate plans (' intentions ') for given goals ('desires'). Typical applications of PRS include management, monitoring and fault detection systems. | https://en.wikipedia.org/wiki/Reasoning_system |
The method of reassignment is a technique for sharpening a time-frequency representation (e.g. spectrogram or the short-time Fourier transform ) by mapping the data to time-frequency coordinates that are nearer to the true region of support of the analyzed signal. The method has been independently introduced by several parties under various names, including method of reassignment , remapping , time-frequency reassignment , and modified moving-window method . [ 1 ] The method of reassignment sharpens blurry time-frequency data by relocating the data according to local estimates of instantaneous frequency and group delay. This mapping to reassigned time-frequency coordinates is very precise for signals that are separable in time and frequency with respect to the analysis window.
Many signals of interest have a distribution of energy that varies in time and frequency. For example, any sound signal having a beginning or an end has an energy distribution that varies in time, and most sounds exhibit considerable variation in both time and frequency over their duration. Time-frequency representations are commonly used to analyze or characterize such signals. They map the one-dimensional time-domain signal into a two-dimensional function of time and frequency. A time-frequency representation describes the variation of spectral energy distribution over time, much as a musical score describes the variation of musical pitch over time.
In audio signal analysis, the spectrogram is the most commonly used time-frequency representation, probably because it is well understood, and immune to so-called "cross-terms" that sometimes make other time-frequency representations difficult to interpret. But the windowing operation required in spectrogram computation introduces an unsavory tradeoff between time resolution and frequency resolution, so spectrograms provide a time-frequency representation that is blurred in time, in frequency, or in both dimensions. The method of time-frequency reassignment is a technique for refocussing time-frequency data in a blurred representation like the spectrogram by mapping the data to time-frequency coordinates that are nearer to the true region of support of the analyzed signal. [ 2 ]
One of the best-known time-frequency representations is the spectrogram, defined as the squared magnitude of the short-time Fourier transform. Though the short-time phase spectrum is known to contain important temporal information about the signal, this information is difficult to interpret, so typically, only the short-time magnitude spectrum is considered in short-time spectral analysis. [ 2 ]
As a time-frequency representation, the spectrogram has relatively poor resolution. Time and frequency resolution are governed by the choice of analysis window and greater concentration in one domain is accompanied by greater smearing in the other. [ 2 ]
A time-frequency representation having improved resolution, relative to the spectrogram, is the Wigner–Ville distribution , which may be interpreted as a short-time Fourier transform with a window function that is perfectly matched to the signal. The Wigner–Ville distribution is highly concentrated in time and frequency, but it is also highly nonlinear and non-local. Consequently, this distribution is very sensitive to noise, and generates cross-components that often mask the components of interest, making it difficult to extract useful information concerning the distribution of energy in multi-component signals. [ 2 ]
Cohen's class of bilinear time-frequency representations is a class of "smoothed" Wigner–Ville distributions, employing a smoothing kernel that can reduce sensitivity of the distribution to noise and suppresses cross-components, at the expense of smearing the distribution in time and frequency. This smearing causes the distribution to be non-zero in regions where the true Wigner–Ville distribution shows no energy. [ 2 ]
The spectrogram is a member of Cohen's class. It is a smoothed Wigner–Ville distribution with the smoothing kernel equal to the Wigner–Ville distribution of the analysis window. The method of reassignment smooths the Wigner–Ville distribution, but then refocuses the distribution back to the true regions of support of the signal components. The method has been shown to reduce time and frequency smearing of any member of Cohen's class. [ 2 ] [ 3 ] In the case of the reassigned spectrogram, the short-time phase spectrum is used to correct the nominal time and frequency coordinates of the spectral data, and map it back nearer to the true regions of support of the analyzed signal.
Pioneering work on the method of reassignment was published by Kodera, Gendrin, and de Villedary under the name of Modified Moving Window Method . [ 4 ] Their technique enhances the resolution in time and frequency of the classical Moving Window Method (equivalent to the spectrogram) by assigning to each data point a new time-frequency coordinate that better-reflects the distribution of energy in the analyzed signal. [ 4 ] : 67
In the classical moving window method, a time-domain signal, x ( t ) {\displaystyle x(t)} is decomposed into a set of coefficients, ϵ ( t , ω ) {\displaystyle \epsilon (t,\omega )} , based on a set of elementary signals, h ω ( t ) {\displaystyle h_{\omega }(t)} , defined [ 4 ] : 73
where h ( t ) {\displaystyle h(t)} is a (real-valued) lowpass kernel function, like the window function in the short-time Fourier transform. The coefficients in this decomposition are defined
where M t ( ω ) {\displaystyle M_{t}(\omega )} is the magnitude, and ϕ τ ( ω ) {\displaystyle \phi _{\tau }(\omega )} the phase, of X t ( ω ) {\displaystyle X_{t}(\omega )} , the Fourier transform of the signal x ( t ) {\displaystyle x(t)} shifted in time by t {\displaystyle t} and windowed by h ( t ) {\displaystyle h(t)} . [ 5 ] : 4
x ( t ) {\displaystyle x(t)} can be reconstructed from the moving window coefficients by [ 5 ] : 8
For signals having magnitude spectra, M ( t , ω ) {\displaystyle M(t,\omega )} , whose time variation is slow relative to the phase variation, the maximum contribution to the reconstruction integral comes from the vicinity of the point t , ω {\displaystyle t,\omega } satisfying the phase stationarity condition [ 4 ] : 74
or equivalently, around the point t ^ , ω ^ {\displaystyle {\hat {t}},{\hat {\omega }}} defined by [ 4 ] : 74
This phenomenon is known in such fields as optics as the principle of stationary phase , which states that for periodic or quasi-periodic signals, the variation of the Fourier phase spectrum not attributable to periodic oscillation is slow with respect to time in the vicinity of the frequency of oscillation, and in surrounding regions the variation is relatively rapid. Analogously, for impulsive signals, that are concentrated in time, the variation of the phase spectrum is slow with respect to frequency near the time of the impulse, and in surrounding regions the variation is relatively rapid. [ 4 ] : 73
In reconstruction, positive and negative contributions to the synthesized waveform cancel, due to destructive interference, in frequency regions of rapid phase variation. Only regions of slow phase variation (stationary phase) will contribute significantly to the reconstruction, and the maximum contribution (center of gravity) occurs at the point where the phase is changing most slowly with respect to time and frequency. [ 4 ] : 71
The time-frequency coordinates thus computed are equal to the local group delay, t ^ g ( t , ω ) , {\displaystyle {\hat {t}}_{g}(t,\omega ),} and local instantaneous frequency, ω ^ i ( t , ω ) , {\displaystyle {\hat {\omega }}_{i}(t,\omega ),} and are computed from the phase of the short-time Fourier transform, which is normally ignored when constructing the spectrogram. These quantities are local in the sense that they represent a windowed and filtered signal that is localized in time and frequency, and are not global properties of the signal under analysis. [ 4 ] : 70
The modified moving window method, or method of reassignment, changes (reassigns) the point of attribution of ϵ ( t , ω ) {\displaystyle \epsilon (t,\omega )} to this point of maximum contribution t ^ ( t , ω ) , ω ^ ( t , ω ) {\displaystyle {\hat {t}}(t,\omega ),{\hat {\omega }}(t,\omega )} , rather than to the point t , ω {\displaystyle t,\omega } at which it is computed. This point is sometimes called the center of gravity of the distribution, by way of analogy to a mass distribution. This analogy is a useful reminder that the attribution of spectral energy to the center of gravity of its distribution only makes sense when there is energy to attribute, so the method of reassignment has no meaning at points where the spectrogram is zero-valued. [ 2 ]
In digital signal processing, it is most common to sample the time and frequency domains. The discrete Fourier transform is used to compute samples X ( k ) {\displaystyle X(k)} of the Fourier transform from samples x ( n ) {\displaystyle x(n)} of a time domain signal. The reassignment operations proposed by Kodera et al. cannot be applied directly to the discrete short-time Fourier transform data, because partial derivatives cannot be computed directly on data that is discrete in time and frequency, and it has been suggested that this difficulty has been the primary barrier to wider use of the method of reassignment.
It is possible to approximate the partial derivatives using finite differences. For example, the phase spectrum can be evaluated at two nearby times, and the partial derivative with respect to time can be approximated as the difference between the two values divided by the time difference, as in ∂ϕ(t, ω)/∂t ≈ [ϕ(t + Δt, ω) − ϕ(t, ω)] / Δt.
For sufficiently small values of Δ t {\displaystyle \Delta t} and Δ ω , {\displaystyle \Delta \omega ,} and provided that the phase difference is appropriately "unwrapped", this finite-difference method yields good approximations to the partial derivatives of phase, because in regions of the spectrum in which the evolution of the phase is dominated by rotation due to sinusoidal oscillation of a single, nearby component, the phase is a linear function.
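A sketch of this finite-difference estimate in NumPy, with `stft_frame` as an assumed helper returning the complex short-time spectrum at a given analysis time:

```python
import numpy as np

# Sketch of the finite-difference estimate of the phase derivative with respect
# to time, using two short-time spectra a small time step apart.
def local_frequency_offset(x, t, dt, window):
    phase_1 = np.angle(stft_frame(x, t, window))
    phase_2 = np.angle(stft_frame(x, t + dt, window))
    # Wrap the phase difference into (-pi, pi] before dividing, which plays the
    # role of the "unwrapping" mentioned in the text for small enough dt.
    dphi = np.angle(np.exp(1j * (phase_2 - phase_1)))
    return dphi / dt   # approximates the partial derivative of phase w.r.t. time
```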
Independently of Kodera et al. , Nelson arrived at a similar method for improving the time-frequency precision of short-time spectral data from partial derivatives of the short-time phase spectrum. [ 6 ] It is easily shown that Nelson's cross spectral surfaces compute an approximation of the derivatives that is equivalent to the finite differences method.
Auger and Flandrin showed that the method of reassignment, proposed in the context of the spectrogram by Kodera et al., could be extended to any member of Cohen's class of time-frequency representations by generalizing the reassignment operations to
where W x ( t , ω ) {\displaystyle W_{x}(t,\omega )} is the Wigner–Ville distribution of x ( t ) {\displaystyle x(t)} , and Φ ( t , ω ) {\displaystyle \Phi (t,\omega )} is the kernel function that defines the distribution. They further described a method for computing the times and frequencies for the reassigned spectrogram efficiently and accurately without explicitly computing the partial derivatives of phase. [ 2 ]
In the case of the spectrogram, the reassignment operations can be computed by
where X ( t , ω ) {\displaystyle X(t,\omega )} is the short-time Fourier transform computed using an analysis window h ( t ) , X T h ( t , ω ) {\displaystyle h(t),X_{{\mathcal {T}}h}(t,\omega )} is the short-time Fourier transform computed using a time-weighted analysis window h T ( t ) = t ⋅ h ( t ) {\displaystyle h_{\mathcal {T}}(t)=t\cdot h(t)} and X D h ( t , ω ) {\displaystyle X_{{\mathcal {D}}h}(t,\omega )} is the short-time Fourier transform computed using a time-derivative analysis window h D ( t ) = d d t h ( t ) {\displaystyle h_{\mathcal {D}}(t)={\tfrac {d}{dt}}h(t)} .
Using the auxiliary window functions h T ( t ) {\displaystyle h_{\mathcal {T}}(t)} and h D ( t ) {\displaystyle h_{\mathcal {D}}(t)} , the reassignment operations can be computed at any time-frequency coordinate t , ω {\displaystyle t,\omega } from an algebraic combination of three Fourier transforms evaluated at t , ω {\displaystyle t,\omega } . Since these algorithms operate only on short-time spectral data evaluated at a single time and frequency, and do not explicitly compute any derivatives, this gives an efficient method of computing the reassigned discrete short-time Fourier transform.
One constraint in this method of computation is that the | X ( t , ω ) | 2 {\displaystyle |X(t,\omega )|^{2}} must be non-zero. This is not much of a restriction, since the reassignment operation itself implies that there is some energy to reassign, and has no meaning when the distribution is zero-valued.
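A sketch of these operations in NumPy is given below. The exact signs depend on the STFT convention in use, so the expressions should be read as illustrating the structure of the computation (ratios of STFTs taken with the ordinary, time-weighted and time-derivative windows) rather than as a definitive implementation; `stft` is an assumed helper returning a complex array indexed by (frequency, time), and `times` and `freqs` are 1-D NumPy arrays of the analysis times and frequencies:

```python
import numpy as np

# Sketch of the reassignment operations computed from three STFTs taken with
# the ordinary window h, the time-weighted window t*h(t) and the derivative
# window dh/dt, following the Auger-Flandrin approach.
def reassigned_coordinates(x, h, th, dh, times, freqs):
    X    = stft(x, h)        # ordinary STFT
    X_th = stft(x, th)       # STFT with time-weighted window t*h(t)
    X_dh = stft(x, dh)       # STFT with derivative window dh/dt
    power = np.abs(X) ** 2
    eps = 1e-12
    t_hat = times[None, :] - np.real(X_th * np.conj(X)) / (power + eps)
    w_hat = freqs[:, None] + np.imag(X_dh * np.conj(X)) / (power + eps)
    # Reassignment is undefined where the spectrogram is (numerically) zero.
    mask = power > eps
    return np.where(mask, t_hat, np.nan), np.where(mask, w_hat, np.nan)
```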
The short-time Fourier transform can often be used to estimate the amplitudes and phases of the individual components in a multi-component signal, such as a quasi-harmonic musical instrument tone. Moreover, the time and frequency reassignment operations can be used to sharpen the representation by attributing the spectral energy reported by the short-time Fourier transform to the point that is the local center of gravity of the complex energy distribution. [ 7 ]
For a signal consisting of a single component, the instantaneous frequency can be estimated from the partial derivatives of phase of any short-time Fourier transform channel that passes the component. If the signal is to be decomposed into many components, and the instantaneous frequency of each component is defined as the derivative of its phase with respect to time, then the instantaneous frequency of each individual component can be computed from the phase of the response of a filter that passes that component, provided that no more than one component lies in the passband of the filter.
This is the property, in the frequency domain, that Nelson called separability [ 6 ] and is required of all signals so analyzed. If this property is not met, then the desired multi-component decomposition cannot be achieved, because the parameters of individual components cannot be estimated from the short-time Fourier transform. In such cases, a different analysis window must be chosen so that the separability criterion is satisfied.
If the components of a signal are separable in frequency with respect to a particular short-time spectral analysis window, then the output of each short-time Fourier transform filter is a filtered version of, at most, a single dominant (having significant energy) component, and so the derivative, with respect to time, of the phase of the X ( t , ω 0 ) {\displaystyle X(t,\omega _{0})} is equal to the derivative with respect to time, of the phase of the dominant component at ω 0 . {\displaystyle \omega _{0}.} Therefore, if a component, x n ( t ) , {\displaystyle x_{n}(t),} having instantaneous frequency ω n ( t ) {\displaystyle \omega _{n}(t)} is the dominant component in the vicinity of ω 0 , {\displaystyle \omega _{0},} then the instantaneous frequency of that component can be computed from the phase of the short-time Fourier transform evaluated at ω 0 . {\displaystyle \omega _{0}.} That is,
Just as each bandpass filter in the short-time Fourier transform filterbank may pass at most a single complex exponential component, two temporal events must be sufficiently separated in time that they do not lie in the same windowed segment of the input signal. This is the property of separability in the time domain, and is equivalent to requiring that the time between two events be greater than the length of the impulse response of the short-time Fourier transform filters, the span of non-zero samples in h ( t ) . {\displaystyle h(t).}
In general, there is an infinite number of equally valid decompositions for a multi-component signal. The separability property must be considered in the context of the desired decomposition. For example, in the analysis of a speech signal, an analysis window that is long relative to the time between glottal pulses is sufficient to separate harmonics, but the individual glottal pulses will be smeared, because many pulses are covered by each window (that is, the individual pulses are not separable, in time, by the chosen analysis window). An analysis window that is much shorter than the time between glottal pulses may resolve the glottal pulses, because no window spans more than one pulse, but the harmonic frequencies are smeared together, because the main lobe of the analysis window spectrum is wider than the spacing between the harmonics (that is, the harmonics are not separable, in frequency, by the chosen analysis window). [ 6 ] : 2585
Gardner and Magnasco (2006) argue that the auditory nerves may use a form of the reassignment method to process sounds. These nerves are known for preserving timing (phase) information better than they do magnitudes. The authors come up with a variation of reassignment with complex values (i.e. both phase and magnitude) and show that it produces sparse outputs like auditory nerves do. By running this reassignment with windows of different bandwidths (see discussion in the section above), a "consensus" that captures multiple kinds of signals is found, again like the auditory system. They argue that the algorithm is simple enough for neurons to implement. [ 8 ] [ 9 ] | https://en.wikipedia.org/wiki/Reassignment_method
Reassortment is the mixing of the genetic material of a species into new combinations in different individuals. The product of reassortment is called a reassortant . It is particularly used when two similar viruses that are infecting the same cell exchange genetic material. More specifically, it refers to the swapping of entire segments of the genome, which only occurs between viruses with segmented genomes. [ 1 ] (All known viruses with segmented genomes are RNA viruses.)
The classical example of reassortment is seen in the influenza viruses , whose genomes consist of eight distinct segments of RNA. These segments act like mini-chromosomes, and each time a flu virus is assembled, it requires one copy of each segment.
If a single host (a human, a chicken, or other animal) is infected by two different strains of the influenza virus, then it is possible that new assembled viral particles will be created from segments whose origin is mixed, some coming from one strain and some coming from another. The new reassortant strain will share properties of both of its parental lineages.
Reassortment is responsible for some of the major antigenic shifts in the history of the influenza virus. In the 1957 " Asian flu " and 1968 " Hong Kong flu " pandemics , flu strains were caused by reassortment between an avian virus and a human virus. [ 2 ] [ 3 ] In addition, the H1N1 virus responsible for the 2009 swine flu pandemic has an unusual mix of swine, avian and human influenza genetic sequences. [ 4 ]
When influenza viruses are inactivated by UV irradiation or ionizing radiation , they remain capable of multiplicity reactivation in infected host cells. [ 5 ] [ 6 ] [ 7 ] If any of a virus's genome segments is damaged in such a way as to prevent replication or expression of an essential gene , the virus is inviable when it, alone, infects a host cell (single infection). However, when two or more damaged viruses infect the same cell (multiple infection), the infection can often succeed (multiplicity reactivation) due to reassortment of segments, provided that each of the eight genome segments is present in at least one undamaged copy. [ 8 ]
The reptarenavirus family, responsible for inclusion body disease in snakes, shows a very high degree of genetic diversity due to reassortment of genetic material from multiple strains in the same infected animal. | https://en.wikipedia.org/wiki/Reassortment |
Reaxys is a web-based tool for the retrieval of information about chemical compounds and data from published literature, including journals and patents. The information includes chemical compounds, chemical reactions, chemical properties, related bibliographic data, substance data with synthesis planning information, as well as experimental procedures from selected journals and patents. It is licensed by Elsevier . [ 1 ]
Reaxys was launched in 2009 as the successor to the CrossFire databases. It was developed to provide research chemists with access to current and historical, relevant, organic, inorganic and organometallic chemistry information, from reliable sources via an easy-to-use interface. [ 2 ]
One of the primary goals of Reaxys is to provide research chemists with access to experimentally measured data – reactions, physical, chemical or pharmacological – in one universal and factual platform. [ 3 ] Content covers organic, medicinal, synthetic, agro, fine, catalyst, inorganic and process chemistry and provides information on structures, reactions, and citations. Additional features include a synthesis planner and access to commercial availability information. There have been regular releases and enhancements to Reaxys since it was first launched, including similarity searching.
Reaxys provides links to Scopus for all matching articles and interoperability with ScienceDirect . Access to the database is subject to an annual license agreement.
The content covers more than 200 years of chemistry and has been abstracted from several thousand journal titles, books and patents. [ 4 ] Today the data is drawn from selected journals (400 titles) and chemistry patents, and each reaction or substance record included must meet three conditions in the excerption process:
Journals covered include Advanced Synthesis and Catalysis , Journal of American Chemical Society , Journal of Organometallic Chemistry , Synlett and Tetrahedron .
Patents in Reaxys come from the International Patent Classes: [ 2 ]
Only a very limited number of studies have compared Reaxys with other databases that provide chemical search functionality, such as SciFinder , ChEMBL , PubChem and Questel-Orbit . For example, the most comprehensive study, published in 2020 by researchers from the University of Sydney, concluded that "Reaxys is definitely the first choice, due to both its wealth of data and its precise search facilities...but for less common data and spectra SciFinder contains often more information than Reaxys. PubChem should also be included, not only because of its size and accessibility... Reaxys has well over 100 times the number of experimental property data points <as SciFinder>... In the case of Reaxys and SciFinder, the natural language query algorithms in Reaxys are displayable, but in SciFinder the algorithms are proprietary and not available." [ 5 ] [ 6 ] | https://en.wikipedia.org/wiki/Reaxys
Reba Mithua Bandyopadhyay (born 1972) [ 1 ] is an American science policy analyst. Formerly a professional astronomer, she works as deputy executive director of the President's Council of Advisors on Science and Technology in the US Office of Science and Technology Policy , [ 2 ] and as legislative and science policy analyst for the National Science Board of the National Science Foundation . [ 3 ]
As an astronomer, Bandyopadhyay specialized in observations of the Galactic Center and of star systems containing neutron stars and black holes . [ 4 ] She has also participated in studies of 2060 Chiron , a Solar System object combining the characteristics of comets and asteroids. [ 5 ]
Bandyopadhyay graduated from the Massachusetts Institute of Technology in 1993. [ 5 ] She completed a D.Phil. in 1998 at the University of Oxford in England, with the dissertation Infrared observations of X-ray binaries supervised by Phil Charles. [ 6 ] After postdoctoral research at the Naval Research Laboratory , she worked for the Gemini Observatory from 2001 to 2004, at the observatory's Oxford office. She then became a research scientist at the University of Florida . [ 7 ]
From 2014 to 2015 Bandyopadhyay was a science advisor in the United States Senate , advising Brian Schatz as an American Physical Society Congressional Fellow, [ 2 ] [ 8 ] and from 2015 to 2017 she worked for the National Science Board as an American Association for the Advancement of Science Science & Technology Policy Executive Branch Fellow, [ 2 ] [ 4 ] before taking her present positions as deputy executive director of the President's Council of Advisors on Science and Technology in the US Office of Science and Technology Policy , [ 2 ] and as legislative and science policy analyst for the National Science Board of the National Science Foundation . [ 3 ]
Bandyopadhyay was elected as a Fellow of the American Association for the Advancement of Science (AAAS) in 2021, in the AAAS Section on Astronomy. [ 9 ] She was elected as a Fellow of the American Physical Society (APS) in 2023, after a nomination from the APS Forum on Physics and Society, "for outstanding contributions to the nation through informing, crafting, and advancing innovative, inclusive, and data-driven science and technology policy". [ 10 ] | https://en.wikipedia.org/wiki/Reba_Bandyopadhyay |
Rebar detailing is the discipline of preparing 'shop/placing' or 'fabrication' drawings or shop drawings of steel reinforcement for construction .
Engineers prepare 'design drawings' that develop required strengths by applying rebar size, spacing, location, anchoring details and lap and/or splicing of steel . The depth of concrete cover is also standard part of rebar detail drawing.
By contrast, 'shop/placing drawings' or 'fabrication drawings' translate the intent of the 'design drawings' for the ironworker . These drawings specify the quantity, description, placement, bending shapes with dimensions, and laps of the reinforcing steel. Various applications are used to produce bar bending schedules, which can be fed directly into CNC machines that cut and bend the rebar to the desired shapes.
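The cut-length arithmetic behind such a schedule can be illustrated with a minimal sketch. In the Python example below the bar mark, the leg dimensions and the deduction of two bar diameters per 90° bend are purely hypothetical conventions; actual bend deductions depend on the bend radius and on the rules of the governing detailing standard (ACI/CRSI, RSIC or BS).

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BarItem:
    mark: str             # bar mark used on the placing drawing (hypothetical)
    diameter_mm: float    # nominal bar diameter
    legs_mm: List[float]  # outside dimensions of each leg of the bent shape
    bends_90: int         # number of 90-degree bends
    quantity: int

    def cut_length_mm(self, deduction_per_bend_mm=None):
        """Cut length = sum of outside leg dimensions minus a deduction per bend.

        The deduction of two bar diameters per 90-degree bend is only an
        illustrative rule of thumb; real schedules follow the bend-deduction
        rules of the governing standard and the actual bend radius.
        """
        if deduction_per_bend_mm is None:
            deduction_per_bend_mm = 2 * self.diameter_mm
        return sum(self.legs_mm) - self.bends_90 * deduction_per_bend_mm

# One schedule line: 24 off L-shaped 16 mm bars with 1200 mm and 300 mm legs.
item = BarItem(mark="B1", diameter_mm=16, legs_mm=[1200, 300], bends_90=1, quantity=24)
print(item.mark, item.quantity, "bars, cut length", item.cut_length_mm(), "mm")
```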
The fabrication of the bars is scheduled and the placing/fixing sequence indicated, adding the elements required to support those bars during construction.
'Shop/placing drawings' are submitted to the engineer for review of compliance with design drawings before construction can proceed. These drawings must be detailed using the ACI & CRSI Specifications (United States), ACI & RSIC Specifications (Canada), or BS Specifications (United Kingdom).
Rebar detailing is usually assigned to in-house rebar fabricators or to rebar detailing companies. The great majority of rebar detailing companies are based in the Middle East and India. The salary of a rebar detailer in the United States ranges from $45,000 to $75,000, [ 1 ] but outsourcing is common due to substantially lower wages overseas.
| https://en.wikipedia.org/wiki/Rebar_detailing
A rebar spacer is a short, rod-like device used to secure reinforcing steel bars, or rebar , within cast assemblies for reinforced concrete structures. The rebar spacers are fixed before the concrete is poured and remain within the structure.
The main categories of rebar spacers are:
Rebar spacers can be divided into three raw-material categories:
Each of these categories offers advantages that are specific to certain uses.
Plastic spacers (manufactured from polymers ) offer a classic, fast solution that is easily laid on formwork. The newest solutions focus on non-PVC spacers, which offer added environmental value.
Concrete spacers (manufactured from fibre-reinforced concrete) are used in heavy-weight applications, in construction with increased fire-safety requirements (such as tunnels), and in pre-cast concrete systems.
Metal spacers are typically used for keeping distance between more than one layer of rebar reinforcement.
The concrete spacers use the same raw material as the pour, which improves the water tightness and strength of the concrete. Plastic spacers have the advantage of low-cost production and fast processing.
The engineering study of every reinforced concrete construction , whether it is a building, a bridge , a bearing wall , or another structure, dictates the placement of steel rebars at specific positions within the volume of concrete (the specified concrete cover of the steel reinforcement bars). This cover varies between 10 mm and 100 mm.
The statics of every concrete construction is designed in such a way that steel and concrete properties are combined to achieve the greatest possible strength for the particular construction (e.g. protection against earthquakes ) as well as to prevent the long-term corrosion of steel that would weaken the construction.
The function of rebar spacers is to maintain the precise positioning of steel reinforcement, facilitating the implementation of theoretical design specifications in concrete construction. This includes ensuring that the steel cover for specific structural elements (such as a concrete slab or a beam ) is appropriate and generally uniform within the element. [ 1 ]
The use of spacers is particularly important in areas with high earthquake activity in combination with corrosive environments (like proximity to the salt water of the sea), [ 2 ] for example Japan , Iran , Greece , California , etc.
Plastic spacers and bar supports do not bond well with concrete and are not compatible materials. [ 1 ] Plastic has mechanical properties (holds the bar in position) but no structural properties, and is a foreign element within the construction.
When the concrete is poured into the form, a small gap is created between the concrete and the plastic. Plastic has a coefficient of thermal expansion and contraction 10 to 15 times that of concrete. [ 1 ] When subjected to temperature variations, the plastic continues to expand and contract at the higher coefficient.
At elevated temperatures, plastic may melt. Consequently, this leads to a disconnection between the spacers and the concrete that has been cast. Such separation establishes an unobstructed pathway for corrosive substances to access the steel reinforcement from the concrete product's exterior. This process initiates the corrosion of the steel, which ultimately extends to the concrete.
If steam curing is applied to the concrete, the heat in the curing process causes the plastic to expand while the concrete is relatively fresh and weak. After reaching the maximum curing temperature and volume expansion of the plastic, the temperature is held at this level until the concrete reaches the desired strength. After curing, the subsequent lower temperatures cause the plastic to contract, and a gap remains at the interface between the plastic and concrete.
Plastic spacers are also subject to corrosion when they come into contact with chlorides and chemicals, whereas concrete has a much higher resistance. [ 2 ]
Concrete spacers and bar supports are often made of material with the same properties as the poured concrete, so thermal expansion and contraction are equal. As a result, the concrete and spacers will bond without gaps. Often these spacers are manufactured from extruded fiber-reinforced concrete, which improves crack resistance. [ 3 ]
Concrete spacers and bar supports help maintain material integrity and uniformity of the concrete. [ 1 ] They provide a cover over the reinforcement that protects against corrosion.
Concrete spacers with a plastic clip or fixing mechanism do not have a negative effect on material integrity and do not weaken the corrosion protective cover over the reinforcement.
The plastic clip or fixing mechanism is hinged from the top of the spacer and does not come into contact with the soffit of the concrete. The plastic clip or fastening mechanism is incorporated at a depth of merely 5 mm into the spacer, thereby preserving the material's integrity at the surface of the product.
This plastic component in the clip or fastening mechanism serves exclusively for attachment and securing of the reinforcement, allowing the concrete segment to fulfil the spacer's functional role. | https://en.wikipedia.org/wiki/Rebar_spacer |
Rebecca Abergel is a professor of nuclear engineering and of chemistry at University of California, Berkeley . [ 1 ] [ 2 ] Abergel is also a senior faculty scientist in the chemical sciences division of Lawrence Berkeley National Laboratory , where she directs the Glenn T. Seaborg Center and leads the Heavy Element Chemistry research group. [ 3 ] She is the recipient of several awards for her research in nuclear and inorganic chemistry.
Her research interests include ligand design and use of spectroscopic characterization methods to study the biological coordination chemistry and toxicity mechanisms of f-elements and inorganic isotopes, especially as applied to decontamination strategies, waste management , remediation, separation, and radiopharmaceutical development. [ 4 ]
Abergel is known for leading the development of new drug products for the treatment of populations contaminated with heavy metals and radionuclides. [ 5 ] Clinical development and commercialization of these products are now spearheaded by HOPO Therapeutics, which she co-founded. [ 6 ]
Abergel was born in Caracas , Venezuela and grew up in Paris , France. She attended the École Normale Supérieure of Paris for her undergraduate degree, where she studied chemistry. While an undergraduate, she received a scholarship to work in the laboratory of Prof. John Arnold at the University of California, Berkeley . [ 7 ] She remained at UC Berkeley to conduct her graduate studies, under the supervision of Prof. Ken Raymond . Her doctoral work focused on the synthesis and characterization of siderophore analogs to probe microbial iron transport systems and to develop new iron chelating agents. [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] After earning her PhD in inorganic chemistry, [ 2 ] Abergel pursued postdoctoral research in the UC Berkeley Department of Chemistry and the group of Prof. Roland Strong at the Fred Hutchinson Cancer Research Center . There she investigated the bacteriostatic function of the innate immune protein siderocalin in binding siderophores from pathogenic microorganisms such as Bacillus anthracis , for the development of new antibiotics. [ 13 ] [ 14 ] [ 15 ]
Abergel began her independent career at Berkeley Lab in 2009. She joined the Nuclear Engineering Department of UC Berkeley in 2018 [ 16 ] and became the Heavy Element Chemistry Group Leader and Glenn T. Seaborg Center Director at Berkeley Lab that same year. In 2023, she joined the UC Berkeley Chemistry Department and became Associate Dean of the College of Engineering. [ 17 ] [ 18 ] | https://en.wikipedia.org/wiki/Rebecca_Abergel |
The Rebis (from the Latin res bina , meaning dual or double matter) is the end product of the alchemical magnum opus or great work.
After one has gone through the stages of putrefaction and purification , separating opposing qualities, those qualities are united once more in what is sometimes described as the divine hermaphrodite , a reconciliation of spirit and matter, a being of both male and female qualities as indicated by the male and female head within a single body. The sun and moon correspond to the male and female halves, just as the Red King and White Queen are similarly associated.
The Rebis image appeared in the work Azoth of the Philosophers by Basil Valentine in 1613. | https://en.wikipedia.org/wiki/Rebis |
Reboilers are heat exchangers typically used to provide heat to the bottom of industrial distillation columns. They boil the liquid from the bottom of a distillation column to generate vapors which are returned to the column to drive the distillation separation . The heat supplied to the column by the reboiler at the bottom of the column is removed by the condenser at the top of the column.
Proper reboiler operation is vital to effective distillation. In a typical classical distillation column, all the vapor driving the separation comes from the reboiler. The reboiler receives a liquid stream from the column bottom and may partially or completely vaporize that stream. Steam usually provides the heat required for the vaporization.
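The sizing logic behind that statement is a simple energy balance: the condensing steam must supply the latent heat needed to generate the required boil-up. The Python sketch below illustrates the balance with assumed, hypothetical property values; a real design would take the latent heats and vapor rate from process data and include sensible-heat and efficiency effects.

```python
def reboiler_duty_kw(vapor_rate_kg_s, latent_heat_bottoms_kj_kg):
    """Heat duty needed to generate the boil-up, Q = V * dH_vap (sensible heat ignored)."""
    return vapor_rate_kg_s * latent_heat_bottoms_kj_kg   # kg/s * kJ/kg = kW

def steam_rate_kg_s(duty_kw, latent_heat_steam_kj_kg):
    """Steam consumption if the heating steam only condenses (no subcooling credit)."""
    return duty_kw / latent_heat_steam_kj_kg

# Hypothetical example: 5 kg/s of boil-up with a latent heat of 350 kJ/kg,
# heated by saturated steam releasing roughly 2100 kJ/kg on condensing.
duty = reboiler_duty_kw(5.0, 350.0)
print(f"duty ≈ {duty:.0f} kW, steam ≈ {steam_rate_kg_s(duty, 2100.0):.2f} kg/s")
```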
The most critical element of reboiler design is the selection of the proper type of reboiler for a specific service. Most reboilers are of the shell and tube heat exchanger type and normally steam is used as the heat source in such reboilers. However, other heat transfer fluids like hot oil or Dowtherm (TM) may be used. Fuel-fired furnaces may also be used as reboilers in some cases.
Commonly used heat exchanger type reboilers are:
Kettle reboilers ( Image 1) are simple and reliable heat exchangers, often used in distillation columns. They function similarly to shell-and-tube heat exchangers but are specifically designed to provide a stable liquid level and maintain natural circulation. In this design, steam flows through a tube bundle, condenses, and exits as condensate. The liquid from the bottom of the column, known as the bottoms, flows through the shell side of the reboiler. Depending on the design, the flow of this liquid may be driven by gravity or require pumping. A retaining wall or overflow weir separates the tube bundle from the reboiler section where the bottoms product is withdrawn, ensuring that the tube bundle remains covered with liquid to reduce the loss of low-boiling compounds. This configuration minimizes the risk of contamination in the final product.
Thermosyphon reboilers (Image 2) do not require pumping of the column bottoms liquid into the reboiler. Natural circulation is obtained by using the density difference between the reboiler inlet column bottoms liquid and the reboiler outlet liquid-vapor mixture to provide sufficient liquid head to deliver the tower bottoms into the reboiler. Thermosyphon reboilers (also known as calandrias ) are more complex than kettle reboilers and require more attention from the plant operators. There are many types of thermosyphon reboilers including vertical, horizontal, once-through or recirculating.
Fired heaters (Image 3), also known as furnaces, may be used as a distillation column reboiler. A pump is required to circulate the column bottoms through the heat transfer tubes in the furnace's convection and radiant sections. The heat source for the fired heater reboiler may be either fuel gas or fuel oil.
A forced circulation reboiler (Image 4) uses a pump to circulate the column bottoms liquid through the reboilers. This is useful if the reboiler must be located far from the column, or if the bottoms product is extremely viscous.
Some fluids are temperature sensitive such as those subject to polymerization by contact with high temperature heat transfer tube walls. High liquid recirculation rates are used to reduce tube wall temperatures, thereby reducing polymerization on the tube and associated fouling. | https://en.wikipedia.org/wiki/Reboiler |
In computing , rebooting is the process by which a running computer system is restarted, either intentionally or unintentionally. Reboots can be either a cold reboot (alternatively known as a hard reboot ) in which the power to the system is physically turned off and back on again (causing an initial boot of the machine); or a warm reboot (or soft reboot ) in which the system restarts while still powered up. The term restart (as a system command) is used to refer to a reboot when the operating system closes all programs and finalizes all pending input and output operations before initiating a soft reboot.
Early electronic computers (like the IBM 1401 ) had no operating system and little internal memory. The input was often a stack of punch cards or via a switch register . On systems with cards, the computer was initiated by pressing a start button that performed a single command - "read a card". This first card then instructed the machine to read more cards that eventually loaded a user program. This process was likened to an old saying, " picking yourself up by the bootstraps ", referring to a horseman who lifts himself off the ground by pulling on the straps of his boots. This set of initiating punch cards was called "bootstrap cards". Thus a cold start was called booting the computer up. If the computer crashed , it was rebooted. The boot reference carried over to all subsequent types of computers.
For IBM PC compatible computers, a cold boot is a boot process in which the computer starts from a powerless state, in which the system performs a complete power-on self-test (POST). [ 1 ] [ 2 ] [ 3 ] [ 4 ] Both the operating system and third-party software can initiate a cold boot; the restart command in Windows 9x initiates a cold reboot, unless Shift key is held. [ 1 ] : 509
A warm boot is initiated by the BIOS , either as a result of the Control-Alt-Delete key combination [ 1 ] [ 2 ] [ 3 ] [ 4 ] or directly through BIOS interrupt INT 19h. [ 5 ] It may not perform a complete POST - for example, it may skip the memory test - and may not perform a POST at all. [ 1 ] [ 2 ] [ 4 ] Malware may prevent or subvert a warm boot by intercepting the Ctrl + Alt + Delete key combination and prevent it from reaching BIOS. [ 6 ] The Windows NT family of operating systems also does the same and reserves the key combination for its own use. [ 7 ] [ 8 ]
Operating systems based on Linux support an alternative to warm boot; the Linux kernel has optional support for kexec , a system call which transfers execution to a new kernel and skips hardware or firmware reset. The entire process occurs independently of the system firmware. The kernel being executed does not have to be a Linux kernel. [ citation needed ]
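As a rough illustration, a kexec-based reboot is typically driven through the kexec-tools command line, which the following Python sketch wraps with subprocess; the kernel and initrd paths and the kernel command line are placeholders, and the exact flags available depend on the installed kexec-tools version.

```python
import subprocess

def kexec_reboot(kernel="/boot/vmlinuz-new", initrd="/boot/initrd-new.img",
                 cmdline="root=/dev/sda1 ro"):
    """Load a new kernel with kexec and jump into it without a firmware reset.

    The paths and kernel command line are placeholders; this needs root
    privileges and kexec-tools, and executing it ends the running system
    just as abruptly as an ordinary reboot command would.
    """
    # Stage the new kernel image and initrd into memory.
    subprocess.run(["kexec", "-l", kernel,
                    f"--initrd={initrd}", f"--command-line={cmdline}"], check=True)
    # Jump into the loaded kernel. In practice a clean shutdown path such as
    # `systemctl kexec` or `reboot` is normally used rather than calling -e directly.
    subprocess.run(["kexec", "-e"], check=True)
```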
Outside the domain of IBM PC compatible computers, the types of boot may not be as clear. According to Sue Loh of Windows CE Base Team, Windows CE devices support three types of boots: Warm, cold and clean. [ 9 ] A warm boot discards program memory. A cold boot additionally discards storage memory (also known as the "object store"), while a clean boot erases all forms of memory storage from the device. However, since these areas do not exist on all Windows CE devices, users are only concerned with two forms of reboot: one that resets the volatile memory and one that wipes the device clean and restores factory settings. For example, for a Windows Mobile 5.0 device, the former is a cold boot and the latter is a clean boot. [ 9 ]
A hard reboot means that the system is not shut down in an orderly manner, skipping file system synchronisation and other activities that would occur on an orderly shutdown. This can be achieved by applying a reset , cycling power , issuing the halt -q command on most Unix-like systems, or triggering a kernel panic .
Hard reboots are used in the cold boot attack .
The term "restart" is used by the Microsoft Windows and Linux families of operating systems to denote an operating system-assisted reboot. In a restart, the operating system ensures that all pending I/O operations are gracefully ended before commencing a reboot.
Users may deliberately initiate a reboot. Rationale for such action may include:
The means of performing a deliberate reboot also vary and may include:
Unexpected loss of power for any reason (including power outage , power supply failure or depletion of battery on a mobile device) forces the system user to perform a cold boot once the power is restored. Some BIOSes have an option to automatically boot the system after a power failure. [ 23 ] [ 24 ] An uninterruptible power supply (UPS), backup battery or redundant power supply can prevent such circumstances.
"Random reboot" is a non-technical term referring to an unintended (and often undesired) reboot following a system crash , whose root cause may not immediately be evident to the user. Such crashes may occur due to a multitude of software and hardware problems, such as triple faults . They are generally symptomatic of an error in ring 0 that is not trapped by an error handler in an operating system or a hardware-triggered non-maskable interrupt .
Systems may be configured to reboot automatically after a power failure, or a fatal system error or kernel panic . The method by which this is done varies depending on whether the reboot can be handled via software or must be handled at the firmware or hardware level. Operating systems in the Windows NT family (from Windows NT 3.1 through Windows 7 ) have an option to modify the behavior of the error handler so that a computer immediately restarts rather than displaying a Blue Screen of Death (BSOD) error message. This option is enabled by default in some editions.
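On Windows NT–family systems this behaviour is conventionally exposed through the AutoReboot value under the CrashControl registry key, which commonly corresponds to the "Automatically restart" checkbox in the Startup and Recovery dialog. The Python sketch below only reads the current setting with the standard winreg module; the key layout is an assumption about the local system rather than a guarantee for every edition, and changing the value requires administrative rights.

```python
import winreg  # Windows-only module from the standard library

def auto_reboot_enabled():
    """Return True if Windows is set to restart automatically after a system failure.

    Reads the AutoReboot DWORD under the CrashControl key; the key and value
    names are the ones conventionally used by the NT family, but treat this as
    an illustrative sketch rather than a guaranteed interface on every edition.
    """
    key_path = r"SYSTEM\CurrentControlSet\Control\CrashControl"
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
        value, _ = winreg.QueryValueEx(key, "AutoReboot")
    return bool(value)

if __name__ == "__main__":
    print("Automatically restart after system failure:", auto_reboot_enabled())
```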
The introduction of advanced power management allowed operating systems greater control of hardware power management features. With Advanced Configuration and Power Interface (ACPI), newer operating systems are able to manage different power states and thereby sleep and/or hibernate . While hibernation also involves turning a system off then subsequently back on again, the operating system does not start from scratch, thereby differentiating this process from rebooting.
A reboot may be simulated by software running on an operating system. For example: the Sysinternals BlueScreen utility, which is used for pranking; or some modes of the bsod XScreenSaver "hack", for entertainment (albeit possibly concerning at first glance). Malware may also simulate a reboot, and thereby deceive a computer user for some nefarious purpose. [ 6 ]
Microsoft App-V sequencing tool captures all the file system operations of an installer in order to create a virtualized software package for users. As part of the sequencing process, it will detect when an installer requires a reboot, interrupt the triggered reboot, and instead simulate the required reboot by restarting services and loading/unloading libraries. [ 25 ]
Windows 8 & 10 enable (by default) a hibernation -like "Fast Startup" (a.k.a. "Fast Boot") which can cause problems (including confusion) for users accustomed to turning off computers to (cold) reboot them. [ 26 ] [ 27 ] [ 28 ] | https://en.wikipedia.org/wiki/Reboot |
In materials science , hardness (antonym: softness ) is a measure of the resistance to localized plastic deformation , such as an indentation (over an area) or a scratch (linear), induced mechanically either by pressing or abrasion . In general, different materials differ in their hardness; for example hard metals such as titanium and beryllium are harder than soft metals such as sodium and metallic tin , or wood and common plastics . Macroscopic hardness is generally characterized by strong intermolecular bonds , but the behavior of solid materials under force is complex; therefore, hardness can be measured in different ways, such as scratch hardness , indentation hardness , and rebound hardness. Hardness is dependent on ductility , elastic stiffness , plasticity , strain , strength , toughness , viscoelasticity , and viscosity . Common examples of hard matter are ceramics , concrete , certain metals , and superhard materials , which can be contrasted with soft matter .
There are three main types of hardness measurements: scratch, indentation, and rebound. Within each of these classes of measurement there are individual measurement scales. For practical reasons conversion tables are used to convert between one scale and another.
Scratch hardness is the measure of how resistant a sample is to fracture or permanent plastic deformation due to friction from a sharp object. [ 1 ] The principle is that an object made of a harder material will scratch an object made of a softer material. When testing coatings, scratch hardness refers to the force necessary to cut through the film to the substrate. The most common test is Mohs scale , which is used in mineralogy . One tool to make this measurement is the sclerometer .
Another tool used to make these tests is the pocket hardness tester . This tool consists of a scale arm with graduated markings attached to a four-wheeled carriage. A scratch tool with a sharp rim is mounted at a predetermined angle to the testing surface. In order to use it a weight of known mass is added to the scale arm at one of the graduated markings, the tool is then drawn across the test surface. The use of the weight and markings allows a known pressure to be applied without the need for complicated machinery. [ 2 ]
Indentation hardness measures the resistance of a sample to material deformation due to a constant compression load from a sharp object. Tests for indentation hardness are primarily used in engineering and metallurgy . The tests work on the basic premise of measuring the critical dimensions of an indentation left by a specifically dimensioned and loaded indenter. Common indentation hardness scales are Rockwell , Vickers , Shore , and Brinell , amongst others.
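For example, the Vickers number is obtained from the indentation load and the mean diagonal of the square impression left by the diamond pyramid indenter. The Python helper below applies the standard Vickers formula; the load and diagonal in the example are made-up illustrative values.

```python
import math

def vickers_hardness(load_kgf, mean_diagonal_mm):
    """Vickers hardness HV = 2 F sin(136°/2) / d² ≈ 1.8544 F / d²,
    with the load F in kilograms-force and the mean impression diagonal d in mm."""
    return 2 * load_kgf * math.sin(math.radians(136 / 2)) / mean_diagonal_mm ** 2

# Illustrative values only: a 10 kgf load leaving a 0.30 mm mean diagonal.
print(f"HV ≈ {vickers_hardness(10, 0.30):.0f}")
```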
Rebound hardness , also known as dynamic hardness , measures the height of the "bounce" of a diamond-tipped hammer dropped from a fixed height onto a material. This type of hardness is related to elasticity . The device used to take this measurement is known as a scleroscope . [ 3 ] Two scales that measure rebound hardness are the Leeb rebound hardness test and the Bennett hardness scale. The Ultrasonic Contact Impedance (UCI) method determines hardness by measuring the frequency of an oscillating rod. The rod consists of a metal shaft with a vibrating element and a pyramid-shaped diamond mounted on one end. [ 4 ]
There are five hardening processes: Hall-Petch strengthening , work hardening , solid solution strengthening , precipitation hardening , and martensitic transformation .
In solid mechanics , solids generally have three responses to force , depending on the amount of force and the type of material:
Strength is a measure of the extent of a material's elastic range, or elastic and plastic ranges together. This is quantified as compressive strength , shear strength , tensile strength depending on the direction of the forces involved. Ultimate strength is an engineering measure of the maximum load a part of a specific material and geometry can withstand.
Brittleness , in technical usage, is the tendency of a material to fracture with very little or no detectable plastic deformation beforehand. Thus in technical terms, a material can be both brittle and strong. In everyday usage "brittleness" usually refers to the tendency to fracture under a small amount of force, which exhibits both brittleness and a lack of strength (in the technical sense). For perfectly brittle materials, yield strength and ultimate strength are the same, because they do not experience detectable plastic deformation. The opposite of brittleness is ductility .
The toughness of a material is the maximum amount of energy it can absorb before fracturing, which is different from the amount of force that can be applied. Toughness tends to be small for brittle materials, because elastic and plastic deformations allow materials to absorb large amounts of energy.
Hardness increases with decreasing particle size . This is known as the Hall-Petch relationship . However, below a critical grain-size, hardness decreases with decreasing grain size. This is known as the inverse Hall-Petch effect.
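In its usual form the Hall-Petch relationship expresses this grain-size dependence as shown below (written here for hardness; the analogous expression for yield stress takes the same form, and the constants must be fitted experimentally for each material):

```latex
% Hall-Petch form written for hardness; H_0 and k_H are empirical material
% constants and d is the average grain diameter. The analogous relation for
% yield stress is \sigma_y = \sigma_0 + k_y d^{-1/2}.
H = H_0 + k_H\, d^{-1/2}
```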
The hardness of a material with respect to deformation depends on its microdurability or small-scale shear modulus in any direction, not on any rigidity or stiffness properties such as its bulk modulus or Young's modulus . Stiffness is often confused with hardness. [ 5 ] [ 6 ] Some materials are stiffer than diamond (e.g. osmium) but are not harder, and are prone to spalling and flaking in squamose or acicular habits.
The key to understanding the mechanism behind hardness is understanding the metallic microstructure , or the structure and arrangement of the atoms at the atomic level. In fact, most important metallic properties critical to the manufacturing of today’s goods are determined by the microstructure of a material. [ 7 ] At the atomic level, the atoms in a metal are arranged in an orderly three-dimensional array called a crystal lattice . In reality, however, a given specimen of a metal likely never contains a consistent single crystal lattice. A given sample of metal will contain many grains, with each grain having a fairly consistent array pattern. At an even smaller scale, each grain contains irregularities.
There are two types of irregularities at the grain level of the microstructure that are responsible for the hardness of the material. These irregularities are point defects and line defects. A point defect is an irregularity located at a single lattice site inside of the overall three-dimensional lattice of the grain. There are three main point defects. If there is an atom missing from the array, a vacancy defect is formed. If there is a different type of atom at the lattice site that should normally be occupied by a metal atom, a substitutional defect is formed. If there exists an atom in a site where there should normally not be, an interstitial defect is formed. This is possible because space exists between atoms in a crystal lattice. While point defects are irregularities at a single site in the crystal lattice, line defects are irregularities on a plane of atoms. Dislocations are a type of line defect involving the misalignment of these planes. In the case of an edge dislocation, a half plane of atoms is wedged between two planes of atoms. In the case of a screw dislocation two planes of atoms are offset with a helical array running between them. [ 8 ]
In glasses, hardness seems to depend linearly on the number of topological constraints acting between the atoms of the network. [ 9 ] Hence, the rigidity theory has allowed predicting hardness values with respect to composition.
Dislocations provide a mechanism for planes of atoms to slip and thus a method for plastic or permanent deformation. [ 7 ] Planes of atoms can flip from one side of the dislocation to the other effectively allowing the dislocation to traverse through the material and the material to deform permanently. The movement allowed by these dislocations causes a decrease in the material's hardness.
The way to inhibit the movement of planes of atoms, and thus make them harder, involves the interaction of dislocations with each other and with interstitial atoms. When a dislocation intersects with a second dislocation, it can no longer traverse through the crystal lattice. The intersection of dislocations creates an anchor point and does not allow the planes of atoms to continue to slip over one another. [ 10 ] A dislocation can also be anchored by the interaction with interstitial atoms. If a dislocation comes in contact with two or more interstitial atoms, the slip of the planes will again be disrupted. The interstitial atoms create anchor points, or pinning points, in the same manner as intersecting dislocations.
By varying the presence of interstitial atoms and the density of dislocations, a particular metal's hardness can be controlled. Although seemingly counter-intuitive, as the density of dislocations increases, there are more intersections created and consequently more anchor points. Similarly, as more interstitial atoms are added, more pinning points that impede the movements of dislocations are formed. As a result, the more anchor points added, the harder the material will become.
Careful note should be taken of the relationship between a hardness number and the stress-strain curve exhibited by the material. The latter, which is conventionally obtained via tensile testing , captures the full plasticity response of the material (which is in most cases a metal). It is in fact a dependence of the (true) von Mises plastic strain on the (true) von Mises stress , but this is readily obtained from a nominal stress – nominal strain curve (in the pre- necking regime), which is the immediate outcome of a tensile test. This relationship can be used to describe how the material will respond to almost any loading situation, often by using the Finite Element Method (FEM). This applies to the outcome of an indentation test (with a given size and shape of indenter, and a given applied load).
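A frequently quoted first-order link of this kind is Tabor's empirical approximation, in which the hardness expressed as a mean contact pressure is roughly three times a representative flow stress of the metal; the constant is material- and work-hardening-dependent, so the relation below should be read as an empirical rule of thumb rather than a law:

```latex
% Tabor's empirical approximation for metals: hardness expressed as a mean
% contact pressure is roughly three times a representative flow stress.
% The constant C is material- and work-hardening-dependent.
H \approx C\,\sigma_{\mathrm{f}}, \qquad C \approx 3
```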
However, while a hardness number thus depends on the stress-strain relationship, inferring the latter from the former is far from simple and is not attempted in any rigorous way during conventional hardness testing. (In fact, the Indentation Plastometry technique, which involves iterative FEM modelling of an indentation test, does allow a stress-strain curve to be obtained via indentation, but this is outside the scope of conventional hardness testing.) A hardness number is just a semi-quantitative indicator of the resistance to plastic deformation. Although hardness is defined in a similar way for most types of test – usually as the load divided by the contact area – the numbers obtained for a particular material are different for different types of test, and even for the same test with different applied loads. Attempts are sometimes made [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] to identify simple analytical expressions that allow features of the stress-strain curve, particularly the yield stress and Ultimate Tensile Stress (UTS), to be obtained from a particular type of hardness number. However, these are all based on empirical correlations, often specific to particular types of alloy: even with such a limitation, the values obtained are often quite unreliable. The underlying problem is that metals with a range of combinations of yield stress and work hardening characteristics can exhibit the same hardness number. The use of hardness numbers for any quantitative purpose should, at best, be approached with considerable caution. | https://en.wikipedia.org/wiki/Rebound_hardness |
RecLOH is a term in genetics that is an abbreviation for " Recombinant Loss of Heterozygosity ".
This is a type of mutation which occurs with DNA by recombination . From a pair of equivalent ("homologous"), but slightly different ( heterozygous ) genes, a pair of identical genes results. In this case there is a non-reciprocal exchange of genetic code between the chromosomes, in contrast to chromosomal crossover , because genetic information is lost.
In genetic genealogy , the term is used particularly concerning similar seeming events in Y chromosome DNA. This type of mutation happens within one chromosome, and does not involve a reciprocal transfer. Rather, one homologous segment "writes over" the other. The mechanism is presumed to be different from RecLOH events in autosomal chromosomes , since the target is the very same chromosome instead of the homologous one.
During the mutation one of these copies overwrites the other. Thus the differences between the two are lost. Because differences are lost, heterozygosity is lost.
Recombination on the Y chromosome takes place not only during meiosis , but at virtually every mitosis when the Y chromosome condenses, because it does not require pairing between chromosomes. The recombination frequency even exceeds the frame shift mutation frequency ( slipped strand mispairing ) of (average fast) Y-STRs ; however, many recombination products may lead to infertile germ cells and "daughter out".
Recombination events (RecLOH) can be observed if YSTR databases are searched for twin alleles at 3 or more duplicated markers on the same palindrome ( hairpin ).
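Such a search can be phrased as a simple screen over haplotypes: group the duplicated markers that lie on the same palindrome and flag haplotypes in which every one of those markers shows only paired ("twin") allele values. The Python sketch below is purely illustrative; the marker grouping and the example haplotypes are assumptions for demonstration, not curated reference data, and a positive flag is only consistent with (not proof of) a RecLOH event.

```python
from collections import Counter

# Hypothetical grouping of duplicated Y-STR markers by palindrome (illustrative only).
PALINDROME_MARKERS = {"P1": ["DYS459", "DYS464", "DYS724"]}

def twin_alleles(values):
    """True if every allele value occurs an even number of times, i.e. the copies
    come in identical pairs: (9, 9) or (15, 15, 17, 17) qualify, (9, 10) does not."""
    return all(count % 2 == 0 for count in Counter(values).values())

def consistent_with_recloh(haplotype, palindrome="P1"):
    """Flag a haplotype in which every duplicated marker on the palindrome shows
    twin alleles -- a pattern consistent with, but not proof of, a RecLOH event."""
    return all(twin_alleles(haplotype[m]) for m in PALINDROME_MARKERS[palindrome])

# Made-up example values before and after a hypothetical RecLOH event.
before = {"DYS459": (9, 10), "DYS464": (15, 16, 17, 17), "DYS724": (36, 38)}
after  = {"DYS459": (9, 9),  "DYS464": (15, 15, 17, 17), "DYS724": (36, 36)}
print(consistent_with_recloh(before), consistent_with_recloh(after))  # False True
```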
For example, DYS459, DYS464 and DYS724 (CDY) are located on the same palindrome P1. A high proportion of 9-9, 15-15-17-17, 36-36 combinations and similar twin allelic patterns will be found. PCR typing technologies have been developed (e.g. DYS464X ) that can verify that there really are, in most cases, two alleles of each, so a gene deletion can be ruled out. Family genealogies have shown many times that parallel changes in all markers located on the same palindrome are frequently observed, and that the result of those changes is always twin alleles. So a 9–10, 15-16-17-17, 36-38 haplotype can change in one recombination event to the one mentioned above, because all three markers ( DYS459, DYS464 and DYS724 ) are affected by one and the same recLOH event. | https://en.wikipedia.org/wiki/RecLOH
RecQ helicase is a family of helicase enzymes initially found in Escherichia coli [ 1 ] that has been shown to be important in genome maintenance. [ 2 ] [ 3 ] [ 4 ] They function through catalyzing the reaction ATP + H2O → ADP + P and thus driving the unwinding of paired DNA and translocating in the 3' to 5' direction. These enzymes can also drive the reaction NTP + H2O → NDP + P to drive the unwinding of either DNA or RNA .
In prokaryotes RecQ is necessary for plasmid recombination and DNA repair from UV-light, free radicals, and alkylating agents. This protein can also reverse damage from replication errors. In eukaryotes, replication does not proceed normally in the absence of RecQ proteins, which also function in aging, silencing, recombination and DNA repair. [ citation needed ]
RecQ family members share three regions of conserved protein sequence, referred to as the helicase, RecQ-Ct, and HRDC domains.
The removal of the N-terminal residues (the helicase and RecQ-Ct domains) impairs both helicase and ATPase activity but has no effect on the binding ability of RecQ, implying that the N-terminus functions as the catalytic end. Truncations of the C-terminus (the HRDC domain) compromise the binding ability of RecQ but not the catalytic function. The importance of RecQ in cellular functions is exemplified by human diseases, which all lead to genomic instability and a predisposition to cancer. [ citation needed ]
There are at least five human RecQ genes; and mutations in three human RecQ genes are implicated in heritable human diseases: WRN gene in Werner syndrome (WS), BLM gene in Bloom syndrome (BS), and RECQL4 in Rothmund–Thomson syndrome . [ 5 ] These syndromes are characterized by premature aging, and can give rise to the diseases of cancer , type 2 diabetes , osteoporosis , and atherosclerosis , which are commonly found in old age. These diseases are associated with high incidence of chromosomal abnormalities, including chromosome breaks, complex rearrangements, deletions and translocations, site specific mutations , and in particular sister chromatid exchanges (more common in BS) that are believed to be caused by a high level of somatic recombination. [ citation needed ]
The proper function of RecQ helicases requires the specific interaction with topoisomerase III (Top 3). Top 3 changes the topological status of DNA by binding and cleaving single stranded DNA and passing either a single stranded or a double stranded DNA segment through the transient break and finally re-ligating the break. The interaction of RecQ helicase with topoisomerase III at the N-terminal region is involved in the suppression of spontaneous and damage induced recombination and the absence of this interaction results in a lethal or very severe phenotype. The emerging picture clearly is that RecQ helicases in concert with Top 3 are involved in maintaining genomic stability and integrity by controlling recombination events, and repairing DNA damage in the G2-phase of the cell cycle. The importance of RecQ for genomic integrity is exemplified by the diseases that arise as a consequence of mutations or malfunctions in RecQ helicases; thus it is crucial that RecQ is present and functional to ensure proper human growth and development. [ citation needed ]
The Werner syndrome ATP-dependent helicase (WRN helicase) is unusual among RecQ DNA family helicases in having an additional exonuclease activity. WRN interacts with DNA-PKcs and the Ku protein complex. This observation, combined with evidence that WRN deficient cells produce extensive deletions at sites of joining of non-homologous DNA ends, suggests a role for WRN protein in the DNA repair process of non-homologous end joining (NHEJ). [ 6 ] WRN also physically interacts with the major NHEJ factor X4L4 ( XRCC4 - DNA ligase 4 complex). [ 7 ] X4L4 stimulates WRN exonuclease activity that likely facilitates DNA end processing prior to final ligation by X4L4. [ 7 ]
WRN also appears to play a role in resolving recombination intermediate structures during homologous recombinational repair (HRR) of DNA double-strand breaks. [ 6 ]
WRN participates in a complex with RAD51 , RAD54, RAD54B and ATR proteins in carrying out the recombination step during inter-strand DNA cross-link repair. [ 8 ]
Evidence was presented that WRN plays a direct role in the repair of methylation induced DNA damage . The process likely involves the helicase and exonuclease activities of WRN that operate together with DNA polymerase beta in long patch base excision repair . [ 9 ]
WRN was found to have a specific role in preventing or repairing DNA damages resulting from chronic oxidative stress , particularly in slowly replicating cells. [ 10 ] This finding suggested that WRN may be important in dealing with oxidative DNA damages that underlie normal aging [ 10 ] (see DNA damage theory of aging ).
Cells from humans with Bloom syndrome are sensitive to DNA damaging agents such as UV and methyl methanesulfonate [ 11 ] indicating deficient DNA repair capability.
The budding yeast Saccharomyces cerevisiae encodes an ortholog of the Bloom syndrome (BLM) protein that is designated Sgs1 (Slow growth suppressor 1). Sgs1(BLM) is a helicase that functions in homologous recombinational repair of DNA double-strand breaks. The Sgs1(BLM) helicase appears to be a central regulator of most of the recombination events that occur during S. cerevisiae meiosis . [ 12 ] During normal meiosis Sgs1(BLM) is responsible for directing recombination towards the alternate formation of either early non-crossovers or Holliday junction joint molecules, the latter being subsequently resolved as crossovers . [ 12 ]
In the plant Arabidopsis thaliana , homologs of the Sgs1(BLM) helicase act as major barriers to meiotic crossover formation. [ 13 ] These helicases are thought to displace the invading strand allowing its annealing with the other 3'overhang end of the double-strand break, leading to non-crossover recombinant formation by a process called synthesis-dependent strand annealing (SDSA) (see Wikipedia article " Genetic recombination "). It is estimated that only about 5% of double-strand breaks are repaired by crossover recombination. Sequela-Arnaud et al. [ 13 ] suggested that crossover numbers are restricted because of the long-term costs of crossover recombination, that is, the breaking up of favorable genetic combinations of alleles built up by past natural selection .
In humans, individuals with Rothmund–Thomson syndrome , and carrying the RECQL4 germline mutation , have several clinical features of accelerated aging . These features include atrophic skin and pigment changes, alopecia , osteopenia , cataracts and an increased incidence of cancer . [ 14 ] RECQL4 mutant mice also show features of accelerated aging. [ 15 ]
RECQL4 has a crucial role in DNA end resection that is the initial step required for homologous recombination (HR)-dependent double-strand break repair. [ 16 ] When RECQL4 is depleted, HR-mediated repair and 5' end resection are severely reduced in vivo . RECQL4 also appears to be necessary for other forms of DNA repair including non-homologous end joining , nucleotide excision repair and base excision repair . [ 14 ] The association of deficient RECQL4 mediated DNA repair with accelerated aging is consistent with the DNA damage theory of aging . | https://en.wikipedia.org/wiki/RecQ_helicase |
Recalescence is an increase in temperature that occurs while cooling metal when a change in structure with an increase in entropy occurs. The heat responsible for the change in temperature is due to the change in entropy. When a structure transformation occurs, the Gibbs free energies of the two structures are more or less the same. Therefore, the process will be exothermic . The heat released is the latent heat .
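The size of the effect can be bounded with a simple energy balance: if the latent heat released by the transformation is absorbed by the sample faster than it is lost to the surroundings, the temperature rise approaches the idealised adiabatic estimate below (the symbols are introduced here for illustration and are not taken from a specific source):

```latex
% Idealised adiabatic bound on the recalescence temperature rise,
% with L the latent heat of transformation per unit mass and
% c_p the specific heat capacity of the sample.
\Delta T_{\max} \approx \frac{L}{c_p}
```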
Recalescence also occurs after supercooling , when the supercooled liquid suddenly crystallizes , forming a solid but releasing heat in the process. [ 1 ]
| https://en.wikipedia.org/wiki/Recalescence
The theory of recapitulation , also called the biogenetic law or embryological parallelism —often expressed using Ernst Haeckel 's phrase " ontogeny recapitulates phylogeny "—is a historical hypothesis that the development of the embryo of an animal, from fertilization to gestation or hatching ( ontogeny ), goes through stages resembling or representing successive adult stages in the evolution of the animal's remote ancestors ( phylogeny ). It was formulated in the 1820s by Étienne Serres based on the work of Johann Friedrich Meckel , after whom it is also known as the Meckel–Serres law .
Since embryos also evolve in different ways , the shortcomings of the theory had been recognized by the early 20th century, and it had been relegated to "biological mythology" [ 1 ] by the mid-20th century. [ 2 ]
Analogies to recapitulation theory have been formulated in other fields, including cognitive development [ 3 ] and music criticism .
The idea of recapitulation was first formulated in biology from the 1790s onwards by the German natural philosophers Johann Friedrich Meckel and Carl Friedrich Kielmeyer , and by Étienne Serres [ 4 ] after which, Marcel Danesi states, it soon gained the status of a supposed biogenetic law. [ 5 ]
The embryological theory was formalised by Serres in 1824–1826, based on Meckel's work, in what became known as the "Meckel-Serres Law". This attempted to link comparative embryology with a "pattern of unification" in the organic world. It was supported by Étienne Geoffroy Saint-Hilaire , and became a prominent part of his ideas. It suggested that past transformations of life could have been through environmental causes working on the embryo, rather than on the adult as in Lamarckism . These naturalistic ideas led to disagreements with Georges Cuvier . The theory was widely supported in the Edinburgh and London schools of higher anatomy around 1830, notably by Robert Edmond Grant , but was opposed by Karl Ernst von Baer 's ideas of divergence , and attacked by Richard Owen in the 1830s. [ 6 ]
Ernst Haeckel (1834–1919) attempted to synthesize the ideas of Lamarckism and Goethe 's Naturphilosophie with Charles Darwin 's concepts. While often seen as rejecting Darwin's theory of branching evolution for a more linear Lamarckian view of progressive evolution, this is not accurate: Haeckel used the Lamarckian picture to describe the ontogenetic and phylogenetic history of individual species, but agreed with Darwin about the branching of all species from one, or a few, original ancestors. [ 8 ] Since early in the twentieth century, Haeckel's "biogenetic law" has been refuted on many fronts. [ 9 ]
Haeckel formulated his theory as "Ontogeny recapitulates phylogeny". The notion later became simply known as the recapitulation theory. Ontogeny is the growth (size change) and development (structure change) of an individual organism; phylogeny is the evolutionary history of a species. Haeckel claimed that the development of advanced species passes through stages represented by adult organisms of more primitive species. [ 9 ] Otherwise put, each successive stage in the development of an individual represents one of the adult forms that appeared in its evolutionary history. [ citation needed ]
For example, Haeckel proposed that the pharyngeal grooves between the pharyngeal arches in the neck of the human embryo not only roughly resembled gill slits of fish, but directly represented an adult "fishlike" developmental stage, signifying a fishlike ancestor. Embryonic pharyngeal slits, which form in many animals when the thin branchial plates separating pharyngeal pouches and pharyngeal grooves perforate, open the pharynx to the outside. Pharyngeal arches appear in all tetrapod embryos: in mammals , the first pharyngeal arch develops into the lower jaw ( Meckel's cartilage ), the malleus and the stapes .
Haeckel produced several embryo drawings that often overemphasized similarities between embryos of related species. Modern biology rejects the literal and universal form of Haeckel's theory, such as its possible application to behavioural ontogeny, i.e. the psychomotor development of young animals and human children. [ 10 ]
Haeckel's theory and drawings were criticised by his contemporary, the anatomist Wilhelm His Sr. (1831–1904), who had developed a rival "causal-mechanical theory" of human embryonic development. [ 11 ] [ 12 ] His's work specifically criticised Haeckel's methodology, arguing that the shapes of embryos were caused most immediately by mechanical pressures resulting from local differences in growth. These differences were, in turn, caused by "heredity". He compared the shapes of embryonic structures to those of rubber tubes that could be slit and bent, illustrating these comparisons with accurate drawings. Stephen Jay Gould noted in his 1977 book Ontogeny and Phylogeny that His's attack on Haeckel's recapitulation theory was far more fundamental than that of any empirical critic, as it effectively stated that Haeckel's "biogenetic law" was irrelevant. [ 13 ] [ 14 ]
Darwin proposed that embryos resembled each other since they shared a common ancestor, which presumably had a similar embryo, but that development did not necessarily recapitulate phylogeny: he saw no reason to suppose that an embryo at any stage resembled an adult of any ancestor. Darwin supposed further that embryos were subject to less intense selection pressure than adults, and had therefore changed less. [ 15 ]
Modern evolutionary developmental biology (evo-devo) follows von Baer, rather than Darwin, in pointing to active evolution of embryonic development as a significant means of changing the morphology of adult bodies. Two of the key principles of evo-devo, namely that changes in the timing ( heterochrony ) and positioning ( heterotopy ) within the body of aspects of embryonic development would change the shape of a descendant's body compared to an ancestor's, were first formulated by Haeckel in the 1870s. These elements of his thinking about development have thus survived, whereas his theory of recapitulation has not. [ 16 ]
The Haeckelian form of recapitulation theory is considered defunct. [ 17 ] Embryos do undergo a conserved period, or phylotypic stage, where their morphology is strongly shaped by their phylogenetic position, [ 18 ] rather than by selective pressures, but that means only that they resemble other embryos at that stage, not ancestral adults as Haeckel had claimed. [ 19 ] The modern view is summarised by the University of California Museum of Paleontology :
Embryos do reflect the course of evolution, but that course is far more intricate and quirky than Haeckel claimed. Different parts of the same embryo can even evolve in different directions. As a result, the Biogenetic Law was abandoned, and its fall freed scientists to appreciate the full range of embryonic changes that evolution can produce—an appreciation that has yielded spectacular results in recent years as scientists have discovered some of the specific genes that control development . [ 20 ]
The idea that ontogeny recapitulates phylogeny has been applied to some other areas.
English philosopher Herbert Spencer was one of the most energetic proponents of evolutionary ideas to explain many phenomena. In 1861, five years before Haeckel first published on the subject, Spencer proposed a possible basis for a cultural recapitulation theory of education with the following claim: [ 21 ]
If there be an order in which the human race has mastered its various kinds of knowledge, there will arise in every child an aptitude to acquire these kinds of knowledge in the same order... Education is a repetition of civilization in little. [ 22 ]
G. Stanley Hall used Haeckel's theories as the basis for his theories of child development. His most influential work, "Adolescence: Its Psychology and Its Relations to Physiology, Anthropology, Sociology, Sex, Crime, Religion and Education" in 1904 [ 23 ] suggested that each individual's life course recapitulated humanity's evolution from "savagery" to "civilization". Though he has influenced later childhood development theories, Hall's conception is now generally considered racist. [ 24 ] Developmental psychologist Jean Piaget favored a weaker version of the formula, according to which ontogeny parallels phylogeny because the two are subject to similar external constraints. [ 25 ]
The Austrian pioneer of psychoanalysis , Sigmund Freud , also favored Haeckel's doctrine. He was trained as a biologist under the influence of recapitulation theory during its heyday, and retained a Lamarckian outlook with justification from the recapitulation theory. [ 26 ] Freud also distinguished between physical and mental recapitulation, in which the differences would become an essential argument for his theory of neuroses . [ 26 ]
In the late 20th century, studies of symbolism and learning in the field of cultural anthropology suggested that "both biological evolution and the stages in the child's cognitive development follow much the same progression of evolutionary stages as that suggested in the archaeological record". [ 27 ]
The musicologist Richard Taruskin in 2005 applied the phrase "ontogeny becomes phylogeny" to the process of creating and recasting music history, often to assert a perspective or argument. For example, the peculiar development of the works by modernist composer Arnold Schoenberg (here an "ontogeny") is generalized in many histories into a "phylogeny" – a historical development ("evolution") of Western music toward atonal styles of which Schoenberg is a representative. Such historiographies of the "collapse of traditional tonality" are faulted by music historians as asserting a rhetorical rather than historical point about tonality's "collapse". [ 28 ]
Taruskin also developed a variation of the motto into the pun "ontogeny recapitulates ontology" to refute the concept of " absolute music " advancing the socio-artistic theories of the musicologist Carl Dahlhaus . Ontology is the investigation of what exactly something is, and Taruskin asserts that an art object becomes that which society and succeeding generations made of it. For example, Johann Sebastian Bach 's St. John Passion , composed in the 1720s, was appropriated by the Nazi regime in the 1930s for propaganda . Taruskin claims the historical development of the St John Passion (its ontogeny) as a work with an anti-Semitic message does, in fact, inform the work's identity (its ontology), even though that was an unlikely concern of the composer. Music or even an abstract visual artwork can not be truly autonomous ("absolute") because it is defined by its historical and social reception. [ 28 ] | https://en.wikipedia.org/wiki/Recapitulation_theory |
In telecommunications , receive-after-transmit time delay is the time interval between (a) the instant of keying off the local transmitter to stop transmitting and (b) the instant the local receiver output has increased to 90% of its steady-state value in response to an RF signal from another transmitter.
The RF signal from the distant transmitter must exist at the local receiver input prior to, or at the time of, keying off the local transmitter.
Receive-after-transmit time delay applies only to half-duplex operation .
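Measured on a sampled record of the receiver output, the definition amounts to finding the first instant after key-off at which the output reaches 90% of its eventual steady-state value. The Python sketch below illustrates this on synthetic data; the waveform and the way steady state is estimated (the mean of the final samples) are assumptions made for the example, not part of the formal definition.

```python
import numpy as np

def receive_after_transmit_delay(t, output, key_off_time):
    """Time from keying off the local transmitter to the first instant at which
    the receiver output reaches 90% of its steady-state value.

    `t` and `output` are matching sample arrays; steady state is estimated here
    from the last 10% of the record, which is an assumption of this sketch
    rather than part of the formal definition."""
    steady_state = output[int(0.9 * len(output)):].mean()
    after = t >= key_off_time
    idx = np.argmax(output[after] >= 0.9 * steady_state)  # first qualifying sample
    return t[after][idx] - key_off_time

# Synthetic receiver output: a first-order rise starting at key-off (t = 0).
t = np.linspace(0.0, 0.05, 5001)          # seconds
output = 1.0 - np.exp(-t / 0.005)         # arbitrary units, 5 ms time constant
print(f"delay ≈ {receive_after_transmit_delay(t, output, 0.0) * 1e3:.1f} ms")
```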
This article incorporates public domain material from Federal Standard 1037C . General Services Administration . Archived from the original on 2022-01-22. (in support of MIL-STD-188 ). | https://en.wikipedia.org/wiki/Receive-after-transmit_time_delay |
The receiver in information theory is the receiving end of a communication channel . It receives and decodes the messages / information that the sender first encoded. [ 1 ] Sometimes the receiver is modeled so as to include the decoder. Real-world receivers like radio receivers or telephones cannot be expected to receive as much information as predicted by the noisy channel coding theorem . | https://en.wikipedia.org/wiki/Receiver_(information_theory)
Receiver autonomous integrity monitoring ( RAIM ) is a technology developed to assess the integrity of individual signals collected and integrated by the receiver units employed in a Global Navigation Satellite System (GNSS). The integrity of received signals and resulting correctness and precision of derived receiver location are of special importance in safety-critical GNSS applications, such as in aviation or marine navigation .
The Global Positioning System (GPS) does not include any internal information about the integrity of its signals. It is possible for a GPS satellite to broadcast slightly incorrect information that will cause navigation information to be incorrect, but there is no way for the receiver to determine this using the standard techniques. RAIM uses redundant signals to produce several GPS position fixes and compare them, and a statistical function determines whether or not a fault can be associated with any of the signals. RAIM is considered available if 24 GPS satellites or more are operative. If the number of GPS satellites is 23 or fewer, RAIM availability must be checked using approved ground-based prediction software.
Several GPS-related systems also provide integrity signals separate from GPS. Among these is the WAAS system, which uses separate signals broadcast from different satellites to indicate these problems directly.
RAIM detects faults through redundant GPS pseudorange measurement. That is, when more satellites are available than needed to produce a position fix, the extra pseudoranges should all be consistent with the computed position. A pseudorange that differs significantly from the expected value (i.e., an outlier ) may indicate a fault of the associated satellite or another signal integrity problem (e.g., ionospheric dispersion). Traditional RAIM uses fault detection (FD) only, however newer GPS receivers incorporate fault detection and exclusion (FDE) which enables them to continue to operate in the presence of a GPS failure.
The test statistic used is a function of the pseudorange measurement residual (the difference between the expected measurement and the observed measurement) and the amount of redundancy. The test statistic is compared with a threshold value, which is determined based on the required probability of false alarm (Pfa).
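A minimal sketch of a residual-based consistency check of the kind described above, assuming a linearized geometry matrix, a single pseudorange noise level, and a chi-square threshold set from the false-alarm probability; the function name, noise value, and structure are illustrative assumptions, not taken from any certified RAIM implementation.

```python
import numpy as np
from scipy.stats import chi2

def raim_fault_detect(H, rho, pfa=1.0 / 15000.0, sigma=5.0):
    """Chi-square test on least-squares pseudorange residuals.

    H     : (n, 4) linearized geometry matrix (satellite unit vectors + clock column)
    rho   : (n,) pseudorange innovations (observed minus predicted), metres
    pfa   : required probability of false alarm
    sigma : assumed pseudorange noise standard deviation, metres
    Returns (fault_detected, test_statistic, threshold).
    """
    n = H.shape[0]
    dof = n - 4                       # redundancy beyond the 4 unknowns (x, y, z, clock)
    if dof < 1:
        raise ValueError("fault detection needs at least five measurements")
    x_hat, *_ = np.linalg.lstsq(H, rho, rcond=None)
    residual = rho - H @ x_hat
    test_statistic = float(residual @ residual) / sigma**2
    threshold = chi2.ppf(1.0 - pfa, dof)   # fault-free statistic ~ chi-square(dof)
    return test_statistic > threshold, test_statistic, threshold
```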
Receiver autonomous integrity monitoring (RAIM) provides integrity monitoring of GPS for aviation applications. In order for a GPS receiver to perform RAIM or fault detection (FD) function, a minimum of five satellites with satisfactory geometry must be visible to it. RAIM has various kinds of implementations; one of them performs consistency checks between all position solutions obtained with various subsets of the visible satellites. The receiver provides an alert to the pilot if the consistency checks fail.
RAIM availability is an important issue when using such kind of algorithm in safety-critical applications (such as the aeronautical ones); in fact, because of geometry and satellite service maintenance, RAIM is not always available, meaning that the receiver's antenna may sometimes have fewer than five satellites in view.
Availability is also a performance indicator of the RAIM algorithm. Availability is a function of the geometry of the constellation which is in view and of other environmental conditions. Seen in this way, availability is clearly not an on–off feature: the algorithm may be available yet unable to detect a failure with the required performance when one happens. Availability is therefore a performance factor of the algorithm and characterizes each of the different kinds of RAIM algorithms and methodologies.
An enhanced version of RAIM employed in some receivers is known as fault detection and exclusion (FDE). At least one satellite, in addition to those required for navigation, must be in view for the receiver to perform the RAIM function; thus, RAIM needs a minimum of five satellites in view or four satellites and a barometric altimeter (baro-aiding, a method of augmenting the GPS integrity solution by using a non-satellite input source) to detect an integrity anomaly. For receivers capable of doing so, RAIM needs six satellites in view (or five satellites with baro-aiding) to isolate the corrupt satellite signal and remove it from the navigation solution. [ 1 ]
Upon detection, proper fault exclusion determines and excludes the source of the failure (without necessarily identifying the individual source causing the problem), thereby allowing GNSS navigation to continue without interruption. The availability of RAIM and FDE will be slightly lower for mid-latitude operations and slightly higher for equatorial and high-latitude regions due to the nature of the orbits. The use of satellites from multiple GNSS constellations or the use of SBAS satellites as additional ranging sources can improve the availability of RAIM and FDE.
GNSS differs from traditional navigation systems because the satellites and areas of degraded coverage are in constant motion. Therefore, if a satellite fails or is taken out of service for maintenance, it is not immediately clear which areas of the airspace will be affected, if any. The location and duration of these outages can be predicted with the aid of computer analysis and reported to pilots during the pre-flight planning process. This prediction process is, however, not fully representative of all RAIM implementations in the different models of receivers. Prediction tools are usually conservative and thus predict lower availability than that actually encountered in flight to provide protection for the lowest end receiver models.
Because RAIM operates autonomously, that is, without the assistance of external signals, it requires redundant pseudorange measurements. To obtain a 3D position solution, at least four measurements are required. To detect a fault, at least five measurements are required, and to isolate and exclude a fault, at least six measurements are required; however, more measurements are often needed depending on the satellite geometry. Typically there are seven to twelve satellites in view.
The test statistic used is a function of the pseudorange measurement residual (the difference between the expected measurement and the observed measurement) and the amount of redundancy. The test statistic is compared with a threshold value, which is determined based on the requirements for the probability of false alarm (Pfa) and the expected measurement noise. In aviation systems, the Pfa is fixed at 1/15000.
The horizontal integrity limit (HIL) or horizontal protection level (HPL) is a figure which represents the radius of a circle which is centered on the GPS position solution and is guaranteed to contain the true position of the receiver to within the specifications of the RAIM scheme (i.e., which meets the Pfa and the probability of missed detection, Pmd). The HPL is calculated as a function of the RAIM threshold and the satellite geometry at the time of the measurements. The HPL is compared with the horizontal alarm limit (HAL) to determine if RAIM is available.
To enable pilots to quickly determine whether en route or approach level RAIM will be available, the FAA and EUROCONTROL have created "dispatch level" websites that predict RAIM status to meet pre-flight check requirements. | https://en.wikipedia.org/wiki/Receiver_autonomous_integrity_monitoring |
RCMAC stands for recent change memory administration center (sometimes mistakenly called recent change message accounting center ) in late 20th-century Bell System parlance; the group is also known as the recent change memory administration group (RCMAG). [ 1 ] It is an organization of people in a phone company which is responsible for programming the service and features purchased by residential and business customers into the central office . Generally the term is used only in large US phone companies called Regional Bell Operating Companies (RBOCs) .
Installing a telephone line is a complex process, involving coordinated work on outside plant and inside plant. Inside plant work includes running a jumper on the main distribution frame and programming the switch. Mid-20th-century crossbar switches had no computer, hence the same workers who installed the jumper generally wired the necessary information into switch cross connect translations as well. Records were kept as pencil notations in ledger books or index cards.
Stored program control exchanges in the 1970s had teleprinter channels for entering and verifying translation information, which allowed centralizing these functions. In the 1980s, the resulting conglomeration of Teletype machines was replaced with a more organized system called MARCH which could more easily be coordinated with COSMOS , TIRKS and other operations support systems .
Generally, the RCMAC organization came into existence with the 1A switches from Bell Labs (later Lucent, now known as Alcatel-Lucent), from which the term "recent change memory" originated.
With the introduction of various automation systems, the function of the RCMAC is now better described as that of an organization of people in the phone company responsible for programming the service and features of phone service where service orders have failed to follow the automated process, investigating and resolving customer trouble reports possibly related to incorrect programming of service and features, and supporting outside plant technicians repairing or installing a customer's phone service. | https://en.wikipedia.org/wiki/Recent_change_memory_administration_center |
In NMR spectroscopy , receptivity refers to the relative detectability of a particular element . Some elements are easily detected, some less so. The receptivity is a function of the abundance of the element's NMR-responsive isotope and that isotope's gyromagnetic ratio (or equivalently, the nuclear magnetic moment ). Some isotopes, tritium for example, have large gyromagnetic ratios but low abundance. Other isotopes, for example 103 Rh , are highly abundant but have low gyromagnetic ratios. Widely used NMR spectroscopies often focus on highly receptive elements: 1 H , 19 F , and 31 P . [ 1 ]
This nuclear magnetic resonance –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Receptivity_(NMR) |
Receptor-mediated endocytosis ( RME ), also called clathrin-mediated endocytosis , is a process by which cells absorb metabolites , hormones , proteins – and in some cases viruses – by the inward budding of the plasma membrane ( invagination ). This process forms vesicles containing the absorbed substances and is strictly mediated by receptors on the surface of the cell. Only the receptor-specific substances can enter the cell through this process.
Although receptors and their ligands can be brought into the cell through a few mechanisms (e.g. caveolin and lipid raft ), clathrin -mediated endocytosis remains the best studied. Clathrin-mediated endocytosis of many receptor types begins with the ligands binding to receptors on the cell plasma membrane. The ligand and receptor will then recruit adaptor proteins and clathrin triskelions to the plasma membrane around where invagination will take place. Invagination of the plasma membrane then occurs, forming a clathrin-coated pit. [ 1 ] Other receptors can nucleate a clathrin-coated pit allowing formation around the receptor. A mature pit will be cleaved from the plasma membrane through the use of membrane-binding and fission proteins such as dynamin (as well as other BAR domain proteins), [ 2 ] forming a clathrin-coated vesicle that then sheds its clathrin coat and typically fuses to a sorting endosome . Once fused, the endocytosed cargo (receptor and/or ligand) can then be sorted to lysosomal , recycling, or other trafficking pathways. [ 1 ]
The function of receptor-mediated endocytosis is diverse. It is widely used for the specific uptake of certain substances required by the cell (examples include LDL via the LDL receptor or iron via transferrin ). Receptor-mediated endocytosis is well recognized as a means of downregulating transmembrane signal transduction, but it can also promote sustained signal transduction. [ 3 ] The activated receptor becomes internalised and is transported to late endosomes and lysosomes for degradation. However, receptor-mediated endocytosis is also actively implicated in transducing signals from the cell periphery to the nucleus. This became apparent when it was found that the association and formation of specific signaling complexes via clathrin-mediated endocytosis is required for the effective signaling of hormones (e.g. EGF ). Additionally it has been proposed that the directed transport of active signaling complexes to the nucleus might be required to enable signaling, due to the fact that random diffusion is too slow, [ 4 ] and mechanisms permanently downregulating incoming signals are strong enough to shut down signaling completely without additional signal-transducing mechanisms. [ 5 ]
Using fluorescent or EM visible dyes to tag specific molecules in living cells, it is possible to follow the internalization of cargo molecules and the evolution of a clathrin-coated pit by fluorescence microscopy and immuno electron microscopy. [ 6 ] [ 7 ]
Because the internalization step does not discriminate what is attached to the ligand, the ligand can be a carrier for larger molecules. If the target cell has a known specific pinocytotic receptor , drugs can be attached and will be internalized.
To achieve internalisation of nanoparticles into cells, such as T cells , antibodies can be used to target the nanoparticles to specific receptors on the cell surface (such as CCR5 ). [ 8 ] This is one method of improving drug delivery to immune cells.
The development of photoswitchable peptide inhibitors of protein-protein interactions involved in clathrin-mediated endocytosis (Traffic Lights peptides) [ 9 ] [ 10 ] [ 11 ] and photoswitchable small molecule inhibitors of dynamin (Dynazos) [ 12 ] has been reported. These photopharmacological compounds allow spatiotemporal control of the endocytosis with light. | https://en.wikipedia.org/wiki/Receptor-mediated_endocytosis |
In biochemistry and pharmacology , receptors are chemical structures, composed of protein , that receive and transduce signals that may be integrated into biological systems. [ 1 ] These signals are typically chemical messengers [ nb 1 ] which bind to a receptor and produce physiological responses , such as a change in the electrical activity of a cell . For example, GABA , an inhibitory neurotransmitter , inhibits electrical activity of neurons by binding to GABA A receptors . [ 2 ] There are three main ways the action of the receptor can be classified: relay of signal, amplification, or integration. [ 3 ] Relaying sends the signal onward, amplification increases the effect of a single ligand , and integration allows the signal to be incorporated into another biochemical pathway. [ 3 ]
Receptor proteins can be classified by their location. Cell surface receptors , also known as transmembrane receptors, include ligand-gated ion channels , G protein-coupled receptors , and enzyme-linked hormone receptors . [ 1 ] Intracellular receptors are those found inside the cell, and include cytoplasmic receptors and nuclear receptors . [ 1 ] A molecule that binds to a receptor is called a ligand and can be a protein, peptide (short protein), or another small molecule , such as a neurotransmitter , hormone , pharmaceutical drug, toxin, calcium ion or parts of the outside of a virus or microbe. An endogenously produced substance that binds to a particular receptor is referred to as its endogenous ligand. E.g. the endogenous ligand for the nicotinic acetylcholine receptor is acetylcholine , but it can also be activated by nicotine [ 4 ] [ 5 ] and blocked by curare . [ 6 ] Receptors of a particular type are linked to specific cellular biochemical pathways that correspond to the signal. While numerous receptors are found in most cells, each receptor will only bind with ligands of a particular structure. This has been analogously compared to how locks will only accept specifically shaped keys . When a ligand binds to a corresponding receptor, it activates or inhibits the receptor's associated biochemical pathway, which may also be highly specialised.
Receptor proteins can be also classified by the property of the ligands. Such classifications include chemoreceptors , mechanoreceptors , gravitropic receptors , photoreceptors , magnetoreceptors and gasoreceptors.
The structures of receptors are very diverse and include the following major categories, among others:
Membrane receptors may be isolated from cell membranes by complex extraction procedures using solvents , detergents , and/or affinity purification .
The structures and actions of receptors may be studied by using biophysical methods such as X-ray crystallography , NMR , circular dichroism , and dual polarisation interferometry . Computer simulations of the dynamic behavior of receptors have been used to gain understanding of their mechanisms of action.
Ligand binding is an equilibrium process. Ligands bind to receptors and dissociate from them according to the law of mass action in the following equation, for a ligand L and receptor, R. The brackets around chemical species denote their concentrations.
One measure of how well a molecule fits a receptor is its binding affinity, which is inversely related to the dissociation constant K d . A good fit corresponds with high affinity and low K d . The final biological response (e.g. second messenger cascade , muscle-contraction), is only achieved after a significant number of receptors are activated.
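A minimal sketch of the mass-action occupancy relationship implied above, assuming the free ligand concentration is not depleted by binding; the K d and concentrations used are illustrative.

```python
def fractional_occupancy(ligand_conc, kd):
    """Equilibrium fraction of receptors bound: [RL]/R_total = [L] / ([L] + Kd).

    Assumes the free ligand concentration is not significantly depleted
    by binding (ligand in large excess over receptor).
    """
    return ligand_conc / (ligand_conc + kd)

# Illustrative Kd of 10 nM: half the receptors are occupied at [L] = Kd.
for conc in (1e-9, 1e-8, 1e-7):           # molar ligand concentrations
    print(f"[L] = {conc:.0e} M -> occupancy {fractional_occupancy(conc, kd=1e-8):.2f}")
```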
Affinity is a measure of the tendency of a ligand to bind to its receptor. Efficacy is the measure of the bound ligand to activate its receptor.
Not every ligand that binds to a receptor also activates that receptor. The following classes of ligands exist:
Note that the idea of receptor agonism and antagonism only refers to the interaction between receptors and ligands and not to their biological effects.
A receptor which is capable of producing a biological response in the absence of a bound ligand is said to display "constitutive activity". [ 13 ] The constitutive activity of a receptor may be blocked by an inverse agonist . The anti-obesity drugs rimonabant and taranabant are inverse agonists at the cannabinoid CB1 receptor and though they produced significant weight loss, both were withdrawn owing to a high incidence of depression and anxiety, which are believed to relate to the inhibition of the constitutive activity of the cannabinoid receptor.
The GABA A receptor has constitutive activity and conducts some basal current in the absence of an agonist. This allows beta carboline to act as an inverse agonist and reduce the current below basal levels.
Mutations in receptors that result in increased constitutive activity underlie some inherited diseases, such as precocious puberty (due to mutations in luteinizing hormone receptors) and hyperthyroidism (due to mutations in thyroid-stimulating hormone receptors).
Early forms of the receptor theory of pharmacology stated that a drug's effect is directly proportional to the number of receptors that are occupied. [ 14 ] Furthermore, a drug effect ceases as a drug-receptor complex dissociates.
Ariëns & Stephenson introduced the terms "affinity" & "efficacy" to describe the action of ligands bound to receptors. [ 15 ] [ 16 ]
In contrast to the accepted Occupation Theory , Rate Theory proposes that the activation of receptors is directly proportional to the total number of encounters of a drug with its receptors per unit time. Pharmacological activity is directly proportional to the rates of dissociation and association, not the number of receptors occupied: [ 17 ]
As a drug approaches a receptor, the receptor alters the conformation of its binding site to produce a drug–receptor complex.
In some receptor systems (e.g. acetylcholine at the neuromuscular junction in smooth muscle), agonists are able to elicit maximal response at very low levels of receptor occupancy (<1%). Thus, that system has spare receptors or a receptor reserve. This arrangement produces an economy of neurotransmitter production and release. [ 12 ]
Cells can increase ( upregulate ) or decrease ( downregulate ) the number of receptors to a given hormone or neurotransmitter to alter their sensitivity to different molecules. This is a locally acting feedback mechanism.
The ligands for receptors are as diverse as their receptors. GPCRs (7TMs) are a particularly vast family, with at least 810 members. There are also LGICs for at least a dozen endogenous ligands, and many more receptors possible through different subunit compositions. Some common examples of ligands and receptors include: [ 19 ]
Some example ionotropic (LGIC) and metabotropic (specifically, GPCRs) receptors are shown in the table below. The chief neurotransmitters are glutamate and GABA; other neurotransmitters are neuromodulatory . This list is by no means exhaustive.
Enzyme linked receptors include Receptor tyrosine kinases (RTKs), serine/threonine-specific protein kinase, as in bone morphogenetic protein and guanylate cyclase, as in atrial natriuretic factor receptor. Of the RTKs, 20 classes have been identified, with 58 different RTKs as members. Some examples are shown below:
Receptors may be classed based on their mechanism or on their position in the cell. 4 examples of intracellular LGIC are shown below:
Many genetic disorders involve hereditary defects in receptor genes. Often, it is hard to determine whether the receptor is nonfunctional or the hormone is produced at decreased level; this gives rise to the "pseudo-hypo-" group of endocrine disorders , where there appears to be a decreased hormonal level while in fact it is the receptor that is not responding sufficiently to the hormone.
The main receptors in the immune system are pattern recognition receptors (PRRs), toll-like receptors (TLRs), killer activated and killer inhibitor receptors (KARs and KIRs), complement receptors , Fc receptors , B cell receptors and T cell receptors . [ 20 ] | https://en.wikipedia.org/wiki/Receptor_(biochemistry) |
A receptor activated solely by a synthetic ligand ( RASSL ) or designer receptor exclusively activated by designer drugs ( DREADD ), is a class of artificially engineered protein receptors used in the field of chemogenetics which are selectively activated by certain ligands . [ 1 ] They are used in biomedical research, in particular in neuroscience to manipulate the activity of neurons . [ 2 ]
Originally differentiated by the approach used to engineer them, RASSLs and DREADDs are often used interchangeably now to represent an engineered receptor-ligand system. [ 3 ] These systems typically utilize G protein-coupled receptors ( GPCR ) engineered to respond exclusively to synthetic ligands, like clozapine N-oxide (CNO), [ 4 ] and not to endogenous ligands. Several types of these receptors exists, derived from muscarinic or κ-opioid receptors . [ 1 ]
One of the first DREADDs was based on the human M 3 muscarinic receptor (hM 3 ). [ 5 ] Only two point mutations of hM 3 were required to achieve a mutant receptor with nanomolar potency for CNO , insensitivity to acetylcholine and low constitutive activity and this DREADD receptor was named hM3Dq. M 1 and M 5 muscarinic receptors have been mutated to create DREADDs hM1Dq and hM5Dq respectively. [ 5 ]
The most commonly used inhibitory DREADD is hM4Di, derived from the M 4 muscarinic receptor that couples with the G i protein . [ 5 ] Another G i coupled human muscarinic receptor, M 2 , was also mutated to obtain the DREADD receptor hM2D. [ 5 ] Another inhibitory G i -DREADD is the kappa-opioid-receptor (KOR) DREADD (KORD) which is selectively activated by salvinorin B (SalB). [ 6 ]
G s -coupled DREADDs have also been developed. These receptors are also known as G s D and are chimeric receptors containing intracellular regions of the turkey erythrocyte β-adrenergic receptor substituted into the rat M 3 DREADD. [ 7 ]
A growing number of ligands that can be used to activate RASSLs / DREADDs are commercially available. [ 8 ]
CNO is the prototypical DREADD activator. CNO activates the excitatory Gq- coupled DREADDs: hM3Dq, hM1Dq and hM5Dq and also the inhibitory hM4Di and hM2Di G i -coupled DREADDs. CNO also activates the G s -coupled DREADD (GsD) and the β-arrestin preferring DREADD: rM3Darr (Rq(R165L). [ 9 ]
Recent findings suggest that systemically administered CNO does not readily cross the blood-brain-barrier in vivo and converts to clozapine which itself activates DREADDs. Clozapine is an atypical antipsychotic which has been indicated to show high DREADD affinity and potency. Subthreshold injections of clozapine itself can be utilised to induce preferential DREADD-mediated behaviors. Therefore, when using CNO, care must be taken in experimental design and proper controls should be incorporated. [ 10 ]
DREADD agonist 21, also known as Compound 21, represents an alternative agonist for muscarinic-based DREADDs and an alternative to CNO. It has been reported that Compound 21 has excellent bioavailability, pharmacokinetic properties and brain penetrability and does not undergo reverse metabolism to clozapine. [ 11 ] Another known agonist is perlapine , a hypnotic drug approved for treating insomnia in Japan. It acts as an activator of G q -, G i -, and G s DREADDs that has structural similarity to CNO. [ 12 ] A more recent agonist of hM3Dq and hM4Di is deschloroclozapine (DCZ). [ 13 ]
On the other hand, salvinorin B (SalB) is a potent and selective activator of KORD. [ 14 ]
JHU37160 and JHU37152 have been marketed commercially as novel DREADD ligands, active in vivo , with high potency and affinity for hM3Dq and hM4Di DREADDs. [ citation needed ]
Dihydrochloride salts of DREADD ligands that are water-soluble (but with differing stabilities in solution) have also been commercially developed (see [ 15 ] [ 16 ] for aqueous stability).
RASSLs and DREADDs are families of designer G-protein-coupled receptors (GPCRs) built specifically to allow for precise spatiotemporal control of GPCR signaling in vivo . These engineered GPCRs are unresponsive to endogenous ligands but can be activated by nanomolar concentrations of pharmacologically inert, drug-like small molecules. Currently, RASSLs exist for the interrogation of several GPCR signaling pathways, including those activated by Gs, Gi, Gq, Golf and β-arrestin. [ 18 ] A major cause for success of RASSL resources has been open exchange of DNA constructs, and RASSL related resources.
The hM3Dq DREADD signals through the Gαq/11 G-protein by stimulating phospholipase C , which triggers the release of calcium from intracellular stores. [ 19 ]
Inhibitory effects of hM4Di-DREADD result from CNO stimulation, which inhibits adenylate cyclase and cAMP production. [ 19 ] This leads to activation of the G-protein inwardly rectifying potassium (GIRK) channels, causing hyperpolarization of the targeted neuronal cell and thus attenuating subsequent activity. [ 20 ]
Gs-DREADDs acts through Gαs G-protein which increases cAMP concentration in cells. [ 21 ]
This chemogenetic technique can be used for remote manipulation of cells, in particular excitable cells like neurons, both in vitro and in vivo with the administration of specific ligands. [ 2 ] Similar techniques in this field include thermogenetics and optogenetics , the control of neurons with temperature or light, respectively. [ 2 ]
Viral expression of DREADD proteins, both in-vivo enhancers and inhibitors of neuronal function, has been used to bidirectionally control behaviors in mice (e.g. odor discrimination). [ 22 ] Due to their ability to modulate neuronal activity, DREADDs are used as a tool to evaluate both the neuronal pathways and behaviors associated with drug-cues and drug addiction. [ 23 ]
Conklin and colleagues designed the first GPCR which could be activated only by a synthetic compound, [ 24 ] and the approach has gradually been gaining momentum. The first international RASSL meeting was scheduled for April 6, 2006. A simple example of the use of a RASSL system in behavioral genetics was illustrated by Mueller et al. (2005) where they showed that expressing a RASSL receptor in sweet taste cells of the mouse tongue led to a strong preference for oral consumption of the synthetic ligand, whereas expressing the RASSL in bitter taste cells caused dramatic taste aversion for the same compound. [ 25 ]
The attenuating effects of the hM4Di-DREADD were originally explored in 2007, before being confirmed in 2014. [ 20 ] | https://en.wikipedia.org/wiki/Receptor_activated_solely_by_a_synthetic_ligand |
Receptor editing is a process that occurs during the maturation of B cells , which are part of the adaptive immune system. This process forms part of central tolerance to attempt to change the specificity of the antigen receptor of self reactive immature B-cells, in order to rescue them from programmed cell death, called apoptosis . [ 1 ] It is thought that 20-50% of all peripheral naive B cells have undergone receptor editing making it the most common method of removing self reactive B cells. [ 2 ]
During maturation in the bone marrow, B cells are tested for interaction with self antigens, which is called negative selection . If the maturing B cells strongly interact with these self antigens, they undergo death by apoptosis. Negative selection is important to avoid the production of B cells that could cause autoimmune diseases . They can avoid apoptosis by modifying the sequence of light chain V and J genes (components of the antigen receptor) so that they have a different specificity and may not recognize self-antigens anymore. This process of changing the specificity of the immature B cell receptor is called receptor editing. | https://en.wikipedia.org/wiki/Receptor_editing |
A receptor modulator , or receptor ligand , is a general term for a substance, endogenous or exogenous, that binds to and regulates the activity of chemical receptors . They are ligands that can act on different parts of receptors and regulate activity in a positive, negative, or neutral direction with varying degrees of efficacy. Categories of these modulators include receptor agonists and receptor antagonists , as well as receptor partial agonists , inverse agonists , orthosteric modulators, and allosteric modulators . [ 1 ] Examples of receptor modulators in modern medicine include CFTR modulators, [ 2 ] selective androgen receptor modulators (SARMs), and muscarinic ACh receptor modulators.
Currently, receptor modulators are categorized in the Agonist, Partial Agonist, Selective Tissue Modulators, Antagonist, and Inverse Agonist categories in terms of the effect they cause. They are further divided into Orthosteric or Allosteric Modulators according to how they effect said result. Typically, a chemical acts in an agonist fashion whenever it instigates or else facilitates a particular reaction by binding to a particular receptor. In contrast, a chemical acts as an antagonist whenever binding to a particular receptor blocks or inhibits a particular response. Between these endpoints exists a gradient defined by a number of variables. One example is Selective Tissue Modulators, which means that a given ligand can behave differently according to the tissue type it is in. As for orthosteric and allosteric modulation, this describes the manner in which the ligand binds to the receptor in question: if it binds directly to the prescribed binding site of a receptor, the ligand is orthosteric in this instance; if the ligand alters the receptor by interacting with it at any place other than a binding site, the interaction is allosteric. Note that a drug's categorization does not dictate how another drug of the same family could be categorized or whether the same drug may also function in another category. An example is found in medications used to treat opioid addiction, with methadone , buprenorphine , naloxone , and naltrexone all in separate categories or in more than one simultaneously. In addition, depending on the cell type, the same modulator, whether agonist, antagonist, inverse agonist, etc., could have a unique specific effect. An example is seen in insulin , under "Receptor Agonists," as it interacts with multiple different cell types as an agonist, but incites different responses in each.
A receptor agonist is a chemical that binds to a receptor with the end result of directly inducing a conformational change in the bound receptor and activating a downstream effect. Some common examples are opium derivatives, such as heroin, and Toll-like receptor agonists. [ 3 ] Heroin functions in this manner, along with other opioids, when bound to μ-opioid receptors . [ 4 ] Opioids' manner of action is both concentration- and receptor-dependent, which provides a key difference between agonists and partial agonists. Another example is insulin , which activates cell receptors to instigate blood glucose uptake. [ 5 ]
Partial agonists are any chemical that can bind to a receptor without eliciting the maximum downstream response as compared to the response from a full agonist. A given partial agonist's affinity for a given receptor is also irrelevant to the consequent effect. An example is buprenorphine , a partial opioid receptor agonist used to treat opioid addictions by directly substituting for them without the same strength of effect.
A receptor antagonist is any given ligand that binds to a receptor in some way without causing any immediate or downstream response, essentially neutralizing the receptor until something with a stronger affinity removes the antagonist or the antagonist itself unbinds. Generally, antagonists can act in one of two ways. First, they can block the receptors directly, preventing the usual ligand from binding, such as in the case of atropine when it blocks specific acetylcholine receptors to provide important medical benefits; this is competitive antagonism, as the two ligands are competing for the same binding sites on the receptor. [ 6 ] Second, they can bind to the receptor at a site other than the designated receptor site, inducing a conformational change that prevents the usual ligand(s) from binding and activating a downstream cascade. A commonly seen and used receptor antagonist is naloxone , another opioid competitive antagonist typically used to treat opioid overdoses by blocking receptors outright. [ 7 ] Further elaboration can be found in "Orthosteric v. Allosteric Modulators."
Inverse agonists differ from regular agonists in that they act on receptors to which a regular agonist binds, such that the bound receptors demonstrate reduced activity compared to their normal basal (constitutive) level. [ 8 ] In other words, inverse agonists limit the efficacy of the bound receptor in some way. This is noted to be beneficial in instances wherein expression of receptors or up-regulated receptor sensitivity could be detrimental, thus making suppression of response the best recourse. A handful of examples of inverse agonist use in therapy include β-blockers , antihistamines , ACP-103 to treat Parkinson's disease , hemopressin , drugs to treat obesity , and more besides. [ 9 ] | https://en.wikipedia.org/wiki/Receptor_modulator |
Receptor theory is the application of receptor models to explain drug behavior. [ 1 ] Pharmacological receptor models preceded accurate knowledge of receptors by many years. [ 2 ] John Newport Langley and Paul Ehrlich introduced the concept that receptors can mediate drug action at the beginning of the 20th century. Alfred Joseph Clark was the first to quantify drug-induced biological responses (specifically, receptor-mediated activation). So far, nearly all of the quantitative theoretical modelling of receptor function has centred on ligand-gated ion channels and G protein-coupled receptors . [ 3 ]
In 1901, Langley challenged the dominant hypothesis that drugs act at nerve endings by demonstrating that nicotine acted at sympathetic ganglia even after the degeneration of the severed preganglionic nerve endings. [ 4 ] In 1905 he introduced the concept of a receptive substance on the surface of skeletal muscle that mediated the action of a drug. Langley postulated that these receptive substances were different in different species (citing the fact that nicotine-induced muscle paralysis in mammals was absent in crayfish). [ 5 ] Around the same time, Ehrlich was trying to understand the basis of selectivity of agents. [ 6 ] He theorized that selectivity arose from the preferential distribution of lead and dyes in different body tissues. However, he later modified the theory in order to explain immune reactions and the selectivity of the immune response. [ 6 ] Thinking that selectivity was derived from interaction with the tissues themselves, Ehrlich envisaged molecules extending from cells that the body could use to distinguish and mount an immune response to foreign objects. However, it was only after Ahlquist demonstrated the differential effects of adrenaline on two distinct receptor populations that the theory of receptor-mediated drug interactions gained acceptance. [ 7 ] [ 8 ]
The receptor occupancy model, which describes agonist and competitive antagonists, was built on the work of Langley, Hill , and Clark. The occupancy model was the first model put forward by Clark to explain the activity of drugs at receptors and quantified the relationship between drug concentration and observed effect. It is based on mass-action kinetics and attempts to link the action of a drug to the proportion of receptors occupied by that drug at equilibrium. [ 9 ] [ 10 ] In particular, the magnitude of the response is directly proportional to the amount of drug bound, and the maximum response would be elicited once all receptors were occupied at equilibrium. He applied mathematical approaches used in enzyme kinetics systematically to the effects of chemicals on tissues. [ 2 ] He showed that for many drugs, the relationship between drug concentration and biological effect corresponded to a hyperbolic curve, similar to that representing the adsorption of a gas onto a metal surface [ 11 ] and fitted the Hill–Langmuir equation . [ 3 ] Clark, together with Gaddum , was the first to introduce the log concentration–effect curve and described the now-familiar 'parallel shift' of the log concentration–effect curve produced by a competitive antagonist. [ 3 ] Attempts to separate the binding phenomenon and activation phenomenon were made by Ariëns in 1954 and by Stephenson in 1956 to account for the intrinsic activity (efficacy) of a drug (that is, its ability to induce an effect after binding). [ 9 ] [ 12 ] [ 13 ] Classic occupational models of receptor activation failed to provide evidence to directly support the idea that receptor occupancy follows a Langmuir curve as the model assumed leading to the development of alternative models to explain drug behaviour. [ 12 ]
The development of the classic theory of drug antagonism by Gaddum, Schild and Arunlakshana built on the work of Langley, Hill and Clark. [ 12 ] Gaddum described a model for the competitive binding of two ligands to the same receptor in a short communication to The Physiological Society in 1937. The description referred only to binding; it was not immediately useful for the analysis of experimental measurements of the effects of antagonists on the response to agonists. [ 14 ] It was Heinz Otto Schild who made measurement of the equilibrium constant for the binding of an antagonist possible. He developed the Schild equation to determine a dose ratio, a measure of the potency of a drug. In Schild regression, the dose ratio, that is, the ratio of the EC 50 of an agonist alone to its EC 50 in the presence of a competitive antagonist as determined on dose–response curves, is used to determine the affinity of an antagonist for its receptor.
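A minimal sketch of how a single dose ratio can be turned into an antagonist affinity estimate using the Schild relation dose ratio = 1 + [B]/K_B; the EC 50 values and antagonist concentration are illustrative, and a full Schild regression would use several antagonist concentrations rather than one.

```python
import math

def antagonist_kb(ec50_control, ec50_with_antagonist, antagonist_conc):
    """Estimate a competitive antagonist's dissociation constant K_B from a
    single dose ratio, using the Schild relation  dose ratio = 1 + [B]/K_B."""
    dose_ratio = ec50_with_antagonist / ec50_control
    return antagonist_conc / (dose_ratio - 1.0)

# Illustrative numbers: the agonist EC50 shifts from 10 nM to 110 nM
# in the presence of 100 nM antagonist (dose ratio = 11).
kb = antagonist_kb(10e-9, 110e-9, 100e-9)
print(f"K_B ~ {kb:.1e} M, pA2 ~ {-math.log10(kb):.1f}")
```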
The flaw in Clark's receptor-occupancy model was that it was insufficient to explain the concept of a partial agonist . This led to the development of agonist models of drug action by Ariens in 1954 and by Stephenson in 1956 to account for the intrinsic activity (efficacy) of a drug (that is, its ability to induce an effect after binding). [ 12 ] [ 13 ]
The two-state model is a simple linear model to describe the interaction between a ligand and its receptor, as well as the active receptor (R * ). [ 15 ] The model uses an equilibrium dissociation constant to describe the interaction between ligand and receptor. It proposes that ligand binding results in a change in receptor state from an inactive to an active state based on the receptor's conformation . A receptor in its active state will ultimately elicit its biological response. It was first described by Black and Leff in 1983 as an alternative model of receptor activation. [ 16 ] Similar to the receptor occupancy model, the theory originated from earlier work by del Castillo & Katz on observations relating to ligand-gated ion channels. [ 3 ] In this model, agonists and inverse agonists are thought to have selective binding affinity for the pre-existing resting and active states [ 3 ] [ 17 ] or can induce a conformational change to a different receptor state, whereas antagonists have no preference in their affinity for a receptor state. [ 18 ] [ 19 ] The idea that receptor conformation (state) affects the binding affinity of a ligand was used by del Castillo & Katz in 1957 to explain a mechanism of partial agonism of receptors; this was based on their work on the action of acetylcholine at the motor endplate [ 3 ] and built on similar work by Wyman & Allen in 1951 on conformational changes in hemoglobin's oxygen binding affinity occurring as a result of oxygen binding. [ 20 ] The del Castillo–Katz mechanism divorces the binding step (that can be made by agonists as well as antagonists) from the receptor activation step (that can be only exerted by agonists), describing them as two independent events. [ 20 ]
The original Ternary complex model was used to describe ligand, receptor, and G-protein interactions. It uses equilibrium dissociation constants for the interactions between the receptor and each ligand (K a for ligand A; K b for ligand B), as well as a cooperativity factor (α) that denotes the mutual effect of the two ligands on each other's affinity for the receptor. An α > 1.0 refers to positive allosteric modulation, an α < 1.0 refers to negative allosteric modulation, and an α = 1.0 means that binding of either ligand to the receptor does not alter the affinity of the other ligand for the receptor (i.e., a neutral modulator). [ 15 ] Further, the α parameter can be added as a subtle but highly useful extension to the ATCM in order to include effects of an allosteric modulator on the efficacy (as distinct from the affinity) of another ligand that binds the receptor, such as the orthosteric agonist. Some ligands can reduce the efficacy but increase the affinity of the orthosteric agonist for the receptor. [ 15 ]
Although it is a simple assumption that the proportional amount of an active receptor state should correlate with the biological response, the experimental evidence for receptor overexpression and spare receptors suggests that the calculation of the net change in the active receptor state is a much better measure for response than is the fractional or proportional change. This is demonstrated by the effects of agonist/ antagonist combinations on the desensitization of receptors. This is also demonstrated by receptors that are activated by overexpression, since this requires a change between R and R* that is difficult to understand in terms of a proportional rather than a net change, and for the molecular model that fits with the mathematical model. [ 21 ] [ 22 ] [ 23 ] | https://en.wikipedia.org/wiki/Receptor_theory |
In biochemistry , receptor–ligand kinetics is a branch of chemical kinetics in which the kinetic species are defined by different non-covalent bindings and/or conformations of the molecules involved, which are denoted as receptor(s) and ligand(s) . Receptor–ligand binding kinetics also involves the on- and off-rates of binding.
A main goal of receptor–ligand kinetics is to determine the concentrations of the various kinetic species (i.e., the states of the receptor and ligand) at all times, from a given set of initial concentrations and a given set of rate constants. In a few cases, an analytical solution of the rate equations may be determined, but this is relatively rare. However, most rate equations can be integrated numerically, or approximately, using the steady-state approximation . A less ambitious goal is to determine the final equilibrium concentrations of the kinetic species, which is adequate for the interpretation of equilibrium binding data.
A converse goal of receptor–ligand kinetics is to estimate the rate constants and/or dissociation constants of the receptors and ligands from experimental kinetic or equilibrium data. The total concentrations of receptor and ligands are sometimes varied systematically to estimate these constants.
The binding constant is a special case of the equilibrium constant K {\displaystyle K} . It is associated with the binding and unbinding reaction of receptor (R) and ligand (L) molecules, which is formalized as: R + L ⇌ RL.
The reaction is characterized by the on-rate constant k o n {\displaystyle k_{\rm {on}}} and the off-rate constant k o f f {\displaystyle k_{\rm {off}}} , which have units of 1/(concentration·time) and 1/time, respectively. In equilibrium, the forward binding transition R + L ⟶ RL {\displaystyle {\ce {{R}+ {L}-> {RL}}}} should be balanced by the backward unbinding transition RL ⟶ R + L {\displaystyle {\ce {{RL}-> {R}+ {L}}}} . That is, k on [R][L] = k off [RL],
where [ R ] {\displaystyle {\ce {[{R}]}}} , [ L ] {\displaystyle {\ce {[{L}]}}} and [ RL ] {\displaystyle {\ce {[{RL}]}}} represent the concentration of unbound free receptors, the concentration of unbound free ligand and the concentration of receptor-ligand complexes. The binding constant, or the association constant K a {\displaystyle K_{\rm {a}}} is defined by K a = k on / k off = [RL] / ([R][L]).
The simplest example of receptor–ligand kinetics is that of a single ligand L binding to a single receptor R to form a single complex C
The equilibrium concentrations are related by the dissociation constant K d : K d = k −1 / k 1 = [R][L] / [C],
where k 1 and k −1 are the forward and backward rate constants , respectively. The total concentrations of receptor and ligand in the system are constant: R tot = [R] + [C] and L tot = [L] + [C].
Thus, only one concentration of the three ([R], [L] and [C]) is independent; the other two concentrations may be determined from R tot , L tot and the independent concentration.
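As a worked illustration of that dependence, the sketch below computes the equilibrium complex concentration directly from R tot , L tot and K d by solving the quadratic that follows from the conservation relations; the numbers are arbitrary, and the code is only an example, not part of the analytical derivation that follows.

```python
import math

def equilibrium_complex(r_tot, l_tot, kd):
    """Equilibrium complex concentration [C] for R + L <-> C.

    Uses the conservation relations [R] = R_tot - [C] and [L] = L_tot - [C]
    in Kd = [R][L]/[C], which gives the quadratic
        C**2 - (R_tot + L_tot + Kd) * C + R_tot * L_tot = 0,
    and takes the root that keeps all concentrations non-negative.
    """
    b = r_tot + l_tot + kd
    return 0.5 * (b - math.sqrt(b * b - 4.0 * r_tot * l_tot))

# Arbitrary illustrative values (any consistent concentration unit).
r_tot, l_tot, kd = 1.0, 2.0, 0.5
c = equilibrium_complex(r_tot, l_tot, kd)
print(c, r_tot - c, l_tot - c)   # [C] and the free [R], [L] it implies
```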
This system is one of the few systems whose kinetics can be determined analytically. [ 1 ] [ 2 ] Choosing [R] as the independent concentration and representing the concentrations by italic variables for brevity (e.g., R = d e f [ R ] {\displaystyle R\ {\stackrel {\mathrm {def} }{=}}\ [{\ce {R}}]} ), the kinetic rate equation can be written
Dividing both sides by k 1 and introducing the constant 2E = R tot - L tot - K d , the rate equation becomes
where the two equilibrium concentrations R ± = d e f E ± D {\displaystyle R_{\pm }\ {\stackrel {\mathrm {def} }{=}}\ E\pm D} are given by the quadratic formula and D is defined
However, only the R + {\displaystyle R_{+}} equilibrium has a positive concentration, corresponding to the equilibrium observed experimentally.
Separation of variables and a partial-fraction expansion yield the integrable ordinary differential equation
whose solution is
or, equivalently,
R ( t ) = R + − g R − 1 − g {\displaystyle R(t)={\frac {R_{+}-gR_{-}}{1-g}}}
for association, and
R ( t ) = R + + g R − 1 + g {\displaystyle R(t)={\frac {R_{+}+gR_{-}}{1+g}}}
for dissociation, respectively; where the integration constant φ 0 is defined
From this solution, the corresponding solutions for the other concentrations C ( t ) {\displaystyle C(t)} and L ( t ) {\displaystyle L(t)} can be obtained. | https://en.wikipedia.org/wiki/Receptor–ligand_kinetics |
Recessional velocity is the rate at which an extragalactic astronomical object recedes (becomes more distant) from an observer as a result of the expansion of the universe . [ 1 ] It can be measured by observing the wavelength shifts of spectral lines emitted by the object, known as the object's cosmological redshift .
Hubble's law is the relationship between a galaxy's distance and its recessional velocity, which is approximately linear for galaxies at distances of up to a few hundred megaparsecs . It can be expressed as v r = H 0 D + v pec ,
where H 0 {\displaystyle H_{0}} is the Hubble constant , D {\displaystyle D} is the proper distance , v r {\displaystyle v_{r}} is the object's recessional velocity, and v p e c {\displaystyle v_{pec}} is the object's peculiar velocity .
The recessional velocity of a galaxy can be calculated from the redshift observed in its emitted spectrum. One application of Hubble's law is to estimate distances to galaxies based on measurements of their recessional velocities. However, for relatively nearby galaxies the peculiar velocity can be comparable to or larger than the recessional velocity, in which case Hubble's law does not give a good estimate of an object's distance based on its redshift. In some cases (such as the Andromeda Galaxy , 2.5 million light-years away and approaching us at 300 km/s, or even Messier 81 at 12 million light-years away and approaching at 34 km/s) v r {\displaystyle v_{r}} is negative (i.e., the galaxy's spectrum is observed to be blueshifted) as a result of the peculiar velocity.
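A minimal sketch of the low-redshift use of Hubble's law described above, assuming an illustrative H 0 of 70 km/s/Mpc, the approximation v ≈ cz, and no peculiar-velocity correction.

```python
C_KM_S = 299_792.458   # speed of light, km/s
H0 = 70.0              # assumed Hubble constant, km/s per Mpc (illustrative value)

def velocity_from_redshift(z):
    """Low-redshift approximation v ~ c*z; valid only for z << 1."""
    return C_KM_S * z

def hubble_distance_mpc(v_kms):
    """Distance implied by Hubble's law, D = v / H0, in megaparsecs.

    Ignores the peculiar velocity, so it is unreliable for nearby galaxies
    where v_pec is comparable to the cosmological recession.
    """
    return v_kms / H0

z = 0.01                                   # illustrative nearby-galaxy redshift
v = velocity_from_redshift(z)
print(f"v ~ {v:.0f} km/s, D ~ {hubble_distance_mpc(v):.0f} Mpc")
```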
This astronomy -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Recessional_velocity |
Recipe Markup Language , formerly known as DESSERT ( Document Encoding and Structuring Specification for Electronic Recipe Transfer ), is an XML -based format for marking up recipes . The format was created in 2000 by the company FormatData.
The format provides detailed markup for defining ingredients, which facilitates automated conversions from one type of measurement to another. The markup language also provides for step-based instructions. Metadata can be added to a RecipeML document through the Dublin Core .
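As a hedged illustration of such a unit conversion, the sketch below parses a simplified, RecipeML-like document with Python's standard XML library; the element and attribute names are assumptions made for this example and are not taken from the actual RecipeML DTD.

```python
import xml.etree.ElementTree as ET

# Simplified, RecipeML-like markup; the element and attribute names below are
# assumptions made for this sketch, not the real RecipeML DTD.
SAMPLE = """<recipe>
  <title>Shortbread</title>
  <ingredient><amount qty="2" unit="cup"/><item>flour</item></ingredient>
  <ingredient><amount qty="250" unit="gram"/><item>butter</item></ingredient>
</recipe>"""

ML_PER_CUP = 236.588   # US customary cup in millilitres

def ingredients_in_metric(xml_text):
    """Yield (item, quantity, unit), converting cup volumes to millilitres."""
    root = ET.fromstring(xml_text)
    for ing in root.iter("ingredient"):
        amount = ing.find("amount")
        qty, unit = float(amount.get("qty")), amount.get("unit")
        if unit == "cup":
            qty, unit = qty * ML_PER_CUP, "ml"
        yield ing.findtext("item"), qty, unit

for item, qty, unit in ingredients_in_metric(SAMPLE):
    print(f"{item}: {qty:g} {unit}")
```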
Software programs that read and write the RecipeML format include Largo Recipes. [ 1 ]
This computing article is a stub . You can help Wikipedia by expanding it .
This cooking article about preparation methods for food and drink is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/RecipeML |
The reciprocal Fibonacci constant ψ is the sum of the reciprocals of the Fibonacci numbers :
ψ = ∑ k = 1 ∞ 1 F k = 1 1 + 1 1 + 1 2 + 1 3 + 1 5 + 1 8 + 1 13 + 1 21 + ⋯ . {\displaystyle \psi =\sum _{k=1}^{\infty }{\frac {1}{F_{k}}}={\frac {1}{1}}+{\frac {1}{1}}+{\frac {1}{2}}+{\frac {1}{3}}+{\frac {1}{5}}+{\frac {1}{8}}+{\frac {1}{13}}+{\frac {1}{21}}+\cdots .}
Because the ratio of successive terms tends to the reciprocal of the golden ratio , which is less than 1, the ratio test shows that the sum converges .
The value of ψ is approximately
ψ = 3.359885666243177553172011302918927179688905133732 … {\displaystyle \psi =3.359885666243177553172011302918927179688905133732\dots } (sequence A079586 in the OEIS ).
With k terms, the series gives O( k ) digits of accuracy. Bill Gosper derived an accelerated series which provides O( k ² ) digits. [ 1 ] ψ is irrational , as was conjectured by Paul Erdős , Ronald Graham , and Leonard Carlitz , and proved in 1989 by Richard André-Jeannin . [ 2 ]
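A short sketch of the direct (unaccelerated) partial sums, using exact rational arithmetic; the term counts chosen are illustrative.

```python
from fractions import Fraction

def reciprocal_fibonacci(n_terms):
    """Partial sum of 1/F_k for k = 1..n_terms, in exact rational arithmetic."""
    a, b = 1, 1          # F_1, F_2
    total = Fraction(0)
    for _ in range(n_terms):
        total += Fraction(1, a)
        a, b = b, a + b
    return total

# The terms decay roughly like the golden ratio to the power -k,
# so each additional ~5 terms contribute about one more correct digit.
for n in (10, 30, 60):
    print(n, float(reciprocal_fibonacci(n)))
```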
Its simple continued fraction representation is:
ψ = [ 3 ; 2 , 1 , 3 , 1 , 1 , 13 , 2 , 3 , 3 , 2 , 1 , 1 , 6 , 3 , 2 , 4 , 362 , 2 , 4 , 8 , 6 , 30 , 50 , 1 , 6 , 3 , 3 , 2 , 7 , 2 , 3 , 1 , 3 , 2 , … ] {\displaystyle \psi =[3;2,1,3,1,1,13,2,3,3,2,1,1,6,3,2,4,362,2,4,8,6,30,50,1,6,3,3,2,7,2,3,1,3,2,\dots ]\!\,} (sequence A079587 in the OEIS ).
In analogy to the Riemann zeta function , define the Fibonacci zeta function as ζ F ( s ) = ∑ n = 1 ∞ 1 ( F n ) s = 1 1 s + 1 1 s + 1 2 s + 1 3 s + 1 5 s + 1 8 s + ⋯ {\displaystyle \zeta _{F}(s)=\sum _{n=1}^{\infty }{\frac {1}{(F_{n})^{s}}}={\frac {1}{1^{s}}}+{\frac {1}{1^{s}}}+{\frac {1}{2^{s}}}+{\frac {1}{3^{s}}}+{\frac {1}{5^{s}}}+{\frac {1}{8^{s}}}+\cdots } for complex number s with Re( s ) > 0 , and its analytic continuation elsewhere. Particularly the given function equals ψ when s = 1 . [ 3 ]
It was shown that:
This mathematics -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Reciprocal_Fibonacci_constant |
In evolutionary biology , reciprocal altruism is a behaviour whereby an organism acts in a manner that temporarily reduces its fitness while increasing another organism's fitness, with the expectation that the other organism will act in a similar manner at a later time.
The concept was initially developed by Robert Trivers to explain the evolution of cooperation as instances of mutually altruistic acts. The concept is close to the strategy of " tit for tat " used in game theory . In 1987, Trivers presented at a symposium on reciprocity, noting that he initially titled his article "The Evolution of Delayed Return Altruism," but reviewer W. D. Hamilton suggested renaming it "The Evolution of Reciprocal Altruism." While Trivers adopted the new title, he retained the original examples, causing confusion about reciprocal altruism for decades. Rothstein and Pierotti (1988) addressed this issue at the symposium, proposing new definitions that clarified the concepts. They argued that Delayed Return Altruism was a superior term and introduced "pseudo-reciprocity" to replace it.
The concept of "reciprocal altruism", as introduced by Trivers, suggests that altruism , defined as an act of helping another individual while incurring some cost for this act, could have evolved since it might be beneficial to incur this cost if there is a chance of being in a reverse situation where the individual who was helped before may perform an altruistic act towards the individual who helped them initially. [ 1 ] This concept finds its roots in the work of W.D. Hamilton , who developed mathematical models for predicting the likelihood of an altruistic act to be performed on behalf of one's kin. [ 2 ]
Putting this into the form of a strategy in a repeated prisoner's dilemma would mean to cooperate unconditionally in the first period and behave cooperatively (altruistically) as long as the other agent does as well. [ 1 ] If chances of meeting another reciprocal altruist are high enough, or if the game is repeated for a long enough amount of time, this form of altruism can evolve within a population.
This is close to the notion of " tit for tat " introduced by Anatol Rapoport , [ 3 ] although there still seems a slight distinction in that "tit for tat" cooperates in the first period and from thereon always replicates an opponent's previous action, whereas "reciprocal altruists" stop cooperation in the first instance of non- cooperation by an opponent and stay non-cooperative from thereon. This distinction leads to the fact that in contrast to reciprocal altruism, tit for tat may be able to restore cooperation under certain conditions despite cooperation having broken down.
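A minimal sketch contrasting the two strategies just described in an iterated prisoner's dilemma, using standard illustrative payoffs (T=5, R=3, P=1, S=0); the round count and opponent set are arbitrary choices for the example.

```python
# One-shot payoffs (my move, their move) -> my payoff: T=5, R=3, P=1, S=0.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(my_hist, their_hist):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not their_hist else their_hist[-1]

def reciprocal_altruist(my_hist, their_hist):
    """Cooperate until the opponent defects once, then defect forever."""
    return "D" if "D" in their_hist else "C"

def always_defect(my_hist, their_hist):
    return "D"

def play(strategy_a, strategy_b, rounds=20):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(reciprocal_altruist, tit_for_tat))   # mutual cooperation throughout
print(play(reciprocal_altruist, always_defect)) # exploited once, then mutual defection
```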
Christopher Stephens shows a set of necessary and jointly sufficient conditions "... for an instance of reciprocal altruism: [ 4 ]
There are two additional conditions necessary "...for reciprocal altruism to evolve:" [ 4 ]
The first two conditions are necessary for altruism as such, while the third distinguishes reciprocal altruism from simple mutualism and the fourth makes the interaction reciprocal.
Condition number five is required as otherwise non-altruists may always exploit altruistic behaviour without any consequences and therefore evolution of reciprocal altruism would not be possible. However, it is pointed out that this "conditioning device" does not need to be conscious. Condition number six is required to avoid cooperation breakdown through forward induction—a possibility suggested by game theoretical models. [ 4 ]
In 1987, Trivers told a symposium on reciprocity that he had originally submitted his article under the title "The Evolution of Delayed Return Altruism", but reviewer W. D. Hamilton suggested that he change the title to "The Evolution of Reciprocal Altruism". Trivers changed the title, but not the examples in the manuscript, which has led to confusion about what were appropriate examples of reciprocal altruism for the last 50 years. In their contribution to that symposium, Rothstein and Pierotti (1988) [ 5 ] addressed this issue and proposed new definitions concerning the topic of altruism, that clarified the issue created by Trivers and Hamilton. They proposed that Delayed Return Altruism was a superior concept and used the term pseudo-reciprocity in place of DRA. [ citation needed ]
The following examples could be understood as altruism. However, showing reciprocal altruism in an unambiguous way requires more evidence as will be shown later.
An example of reciprocal altruism is cleaning symbiosis , such as between cleaner fish and their hosts, though cleaners include shrimps and birds, and clients include fish, turtles, octopuses and mammals. [ 6 ] Aside from the apparent symbiosis of the cleaner and the host during actual cleaning, which cannot be interpreted as altruism, the host displays additional behaviour that meets the criteria for delayed return altruism:
The host fish allows the cleaner fish free entrance and exit and does not eat the cleaner, even after the cleaning is done. [ 7 ] [ 8 ] [ 9 ] [ 10 ] The host signals the cleaner it is about to depart the cleaner's locality, even when the cleaner is not in its body. The host sometimes chases off possible dangers to the cleaner. [ 10 ]
The following evidence supports the hypothesis:
The cleaning by cleaners is essential for the host. In the absence of cleaners the hosts leave the locality or suffer from injuries inflicted by ectoparasites . [ 11 ] There is difficulty and danger in finding a cleaner. Hosts leave their element to get cleaned. [ 10 ] Others wait no longer than 30 seconds before searching for cleaners elsewhere. [ 7 ]
A key requirement for the establishment of reciprocal altruism is that the same two individuals must interact repeatedly, as otherwise the best strategy for the host would be to eat the cleaner as soon as cleaning was complete. This constraint imposes both a spatial and a temporal condition on the cleaner and on its host. Both individuals must remain in the same physical location, and both must have a long enough lifespan, to enable multiple interactions. There is reliable evidence that individual cleaners and hosts do indeed interact repeatedly. [ 9 ] [ 11 ] [ 12 ]
This example meets some, but not all, of the criteria described in Trivers's model. In the cleaner-host system the benefit to the cleaner is always immediate. However, the evolution of reciprocal altruism is contingent on opportunities for future rewards through repeated interactions. In one study, nearby host fish observed "cheater" cleaners and subsequently avoided them. [ 13 ] In these examples, true reciprocity is difficult to demonstrate since failure means the death of the cleaner. However, if Randall's claim that hosts sometimes chase off possible dangers to the cleaner is correct, an experiment might be constructed in which reciprocity could be demonstrated. [ 9 ] In actuality this is one of Trivers' examples of Delayed Return Altruism as discussed by Rothstein and Pierotti 1988.
Warning calls, although they expose the caller and put it in danger, are frequently given by birds. An explanation in terms of altruistic behavior was given by Trivers: [ 1 ]
It has been shown that predators learn specific localities and specialize individually on prey types and hunting techniques. [ 14 ] [ 15 ] [ 16 ] [ 17 ] It is therefore disadvantageous for a bird to have a predator eat a conspecific, because the experienced predator may then be more likely to eat them. Alarming another bird by giving a warning call tends to prevent predators from specializing on the caller's species and locality. In this way, birds in areas in which warning calls are given will be at a selective advantage relative to birds in areas free from warning calls.
Nevertheless, this presentation lacks important elements of reciprocity. It is very hard to detect and ostracize cheaters. There is no evidence that a bird refrains from giving calls when another bird is not reciprocating, nor evidence that individuals interact repeatedly. Given the aforementioned characteristics of bird calling, a continuous bird emigration and immigration environment (true of many avian species) is most likely to be partial to cheaters, since selection against the selfish gene [ 3 ] is unlikely. [ 1 ]
Another explanation for warning calls is that these are not warning calls at all:
A bird, once it has detected a bird of prey, calls to signal to the bird of prey that it has been detected, and that there is no use trying to attack the calling bird. Two facts support this hypothesis:
Red-winged blackbird males help defend neighbors' nests. There are many theories as to why males behave this way. One is that males only defend other nests which contain their extra-pair offspring. Extra-pair offspring are juveniles which may contain some of the male bird's DNA. Another is the tit-for-tat strategy of reciprocal altruism. A third theory is that males help only other closely related males. A study done by the Department of Fisheries and Wildlife provided evidence that males used a tit-for-tat strategy. The department tested many different nests by placing stuffed crows by nests and then observing the behavior of neighboring males. The behaviors they looked for included the number of calls, dives, and strikes. After analyzing the results, there was no significant evidence for kin selection; the presence of extra-pair offspring did not affect the probability of help in nest defense. However, males reduced the amount of defense given to neighbors when neighbor males reduced defense for their nests. This demonstrates a tit-for-tat strategy, where animals help those who previously helped them. This strategy is one type of reciprocal altruism. [ 18 ]
Vampire bats also display reciprocal altruism, as described by Wilkinson. [ 19 ] [ 20 ] The bats feed each other by regurgitating blood. Since bats only feed on blood and will die after just 70 hours of not eating, this food sharing is a great benefit to the receiver and a great cost to the giver. [ 21 ] To qualify for reciprocal altruism, the benefit to the receiver would have to be larger than the cost to the donor. This seems to hold as these bats usually die if they do not find a blood meal two nights in a row. Also, the requirement that individuals who have behaved altruistically in the past are helped by others in the future is confirmed by the data. [ 19 ] However, the consistency of the reciprocal behaviour, namely that a previously non-altruistic bat is refused help when it requires it, has not been demonstrated. Therefore, the bats do not seem to qualify yet as an unequivocal example of reciprocal altruism.
Grooming in primates meets the conditions for reciprocal altruism according to some studies. One of the studies in vervet monkeys shows that among unrelated individuals, grooming induces a higher chance of attending to each other's calls for aid. [ 22 ] However, vervet monkeys also display grooming behaviors within group members, displaying alliances. [ 23 ] This would suggest that vervet monkeys' grooming behavior is partly a matter of kin selection, since the activity in this study occurred between siblings. Moreover, following Stephens's criteria, [ 4 ] for the study to be an example of reciprocal altruism it must demonstrate a mechanism for detecting cheaters.
Numerous species of bacteria engage in reciprocal altruistic behaviors with other species. Typically, this takes the form of bacteria providing essential nutrients for another species, while the other species provides an environment for the bacteria to live in. Reciprocal altruism is exhibited between nitrogen-fixing bacteria and plants in which they reside. Additionally, it can be observed between bacteria and some species of flies such as Bactrocera tryoni . These flies consume nutrient-producing bacteria found on the leaves of plants; in exchange, the bacteria reside within the flies' digestive system. [ 24 ] This reciprocal altruistic behavior has been exploited by techniques designed to eliminate B. tryoni , which are fruit fly pests native to Australia. [ 25 ]
Examples of reciprocal altruism in humans include helping injured individuals, sharing food, tools, or knowledge, [ 26 ] and providing assistance in crises with the expectation of future aid. In social interactions, individuals often engage in direct reciprocity, such as returning favors or lending resources with an implicit understanding of future repayment. Indirect reciprocity is also observed, [ 27 ] where individuals help others based on reputation, encouraging mutual cooperation within a community. Economic and political systems rely on reciprocal altruism through trade agreements, diplomatic alliances, [ 28 ] and social contracts, where long-term benefits outweigh short-term costs. Additionally, studies in game theory , such as the Prisoner’s Dilemma , illustrate how cooperative behaviors emerge and stabilize when individuals recognize the advantages of mutual support.
Some animals seem to be unable to develop reciprocal altruism. For example, pigeons defect rather than responding randomly or playing tit-for-tat in a prisoner's dilemma game against a computer. This may be due to favoring short-term thinking over long-term thinking. [ 29 ]
In comparison to that of other animals, the human altruistic system is a sensitive and unstable one. [ 1 ] Therefore, the tendency to give, the tendency to cheat, and the response to others' acts of giving and cheating must be regulated by a complex psychology in each individual, by social structures, and by cultural traditions. Individuals differ in the degree of these tendencies and responses.
According to Trivers, the following emotional dispositions and their evolution can be understood in terms of regulation of altruism. [ 1 ]
It is not known how individuals pick partners as there has been little research on choice. Modeling indicates that altruism about partner choices is unlikely to evolve, as costs and benefits between multiple individuals are variable. [ 30 ] Therefore, the time or frequency of reciprocal actions contributes more to an individual's choice of partner than the reciprocal act itself. | https://en.wikipedia.org/wiki/Reciprocal_altruism |
A reciprocal frame is a class of self-supporting structure made of three or more beams that requires no center support, used to create roofs, bridges or similar structures.
Reciprocal roofs tend to be constructed in one of two ways. If built using dimensioned timber, each rafter is usually jointed into the previous one. More commonly, these roofs are constructed with roundwood poles where each rafter is laid upon the previous one. In both of these approaches, the roof is assembled by installing a temporary central support that holds the first rafter at the correct height. The first rafter is fitted between the wall and the temporary central support and then further rafters are added, each resting on the last. The final rafter fits on top of the previous rafter and under the very first one. The rafters are then tied before the temporary support is removed. The structure is most effective at lower pitches where there is minimal spreading force exerted at the ringbeam, most being transferred directly downward. Unless some extra elements are added to create redundancy, the structure is only as strong as the weakest element, as the failure of a single element may lead to the failure of the whole structure.
The reciprocal frame, also known as a Mandala roof, [ 1 ] has been used since the twelfth century in Chinese and Japanese architecture, although little or no trace of these ancient methods remains. More recently they were used by architects Kazuhiro Ishii (the Spinning House) and Yasufumi Kijima, and engineer Yoishi Kan (Kijima Stonemason Museum). [ 2 ]
Villard de Honnecourt produced sketches showing similar designs in the 13th century [ 3 ] and similar structures were also used in the chapter house of Lincoln Cathedral . [ 4 ] Josep Maria Jujol used this structure in both the Casa Bofarull and Casa Negre. [ 5 ] | https://en.wikipedia.org/wiki/Reciprocal_frame |
The reciprocal lattice is a concept associated with solids with translational symmetry, and it plays a major role in many areas such as X-ray and electron diffraction as well as the energies of electrons in a solid. It emerges from the Fourier transform of the lattice associated with the arrangement of the atoms. The direct lattice or real lattice is a periodic function in physical space , such as a crystal system (usually a Bravais lattice ). The reciprocal lattice exists in the mathematical space of spatial frequencies or wavenumbers k , known as reciprocal space or k space ; it is the dual of physical space considered as a vector space . In other words, the reciprocal lattice is the lattice that is dual to the direct lattice.
The reciprocal lattice is the set of all vectors G m {\displaystyle \mathbf {G} _{m}} that are wavevectors k of plane waves in the Fourier series of a spatial function whose periodicity is the same as that of a direct lattice R n {\displaystyle \mathbf {R} _{n}} . Each plane wave in this Fourier series has the same phase, or phases that differ by multiples of 2 π {\displaystyle 2\pi } , at each direct lattice point (so essentially the same phase at all the direct lattice points).
The reciprocal lattice of a reciprocal lattice is equivalent to the original direct lattice, because the defining equations are symmetrical with respect to the vectors in real and reciprocal space. Mathematically, direct and reciprocal lattice vectors represent covariant and contravariant vectors , respectively.
The Brillouin zone is a Wigner–Seitz cell of the reciprocal lattice.
Reciprocal space (also called k -space) provides a way to visualize the results of the Fourier transform of a spatial function. It is similar in role to the frequency domain arising from the Fourier transform of a time dependent function; reciprocal space is a space over which the Fourier transform of a spatial function is represented at spatial frequencies or wavevectors of plane waves of the Fourier transform. The domain of the spatial function itself is often referred to as spatial domain or real space. In physical applications, such as crystallography, both real and reciprocal space will often each be two or three dimensional. Whereas the number of spatial dimensions of these two associated spaces will be the same, the spaces will differ in their quantity dimension , so that when the real space has the dimension length ( L ), its reciprocal space will have inverse length , so L −1 (the reciprocal of length).
Reciprocal space comes into play regarding waves, both classical and quantum mechanical. Because a sinusoidal plane wave with unit amplitude can be written as an oscillatory term cos ( k x − ω t + φ 0 ) {\displaystyle \cos(kx-\omega t+\varphi _{0})} , with initial phase φ 0 {\displaystyle \varphi _{0}} , angular wavenumber k {\displaystyle k} and angular frequency ω {\displaystyle \omega } , it can be regarded as a function of both k {\displaystyle k} and x {\displaystyle x} (and the time-varying part as a function of both ω {\displaystyle \omega } and t {\displaystyle t} ). This complementary role of k {\displaystyle k} and x {\displaystyle x} leads to their visualization within complementary spaces (the real space and the reciprocal space). The spatial periodicity of this wave is defined by its wavelength λ {\displaystyle \lambda } , where k λ = 2 π {\displaystyle k\lambda =2\pi } ; hence the corresponding wavenumber in reciprocal space will be k = 2 π / λ {\displaystyle k=2\pi /\lambda } .
In three dimensions, the corresponding plane wave term becomes cos ( k ⋅ r − ω t + φ 0 ) {\displaystyle \cos(\mathbf {k} \cdot \mathbf {r} -\omega t+\varphi _{0})} , which simplifies to cos ( k ⋅ r + φ ) {\displaystyle \cos(\mathbf {k} \cdot \mathbf {r} +\varphi )} at a fixed time t {\displaystyle t} , where r {\displaystyle \mathbf {r} } is the position vector of a point in real space and now k = 2 π e / λ {\displaystyle \mathbf {k} =2\pi \mathbf {e} /\lambda } is the wavevector in the three dimensional reciprocal space. (The magnitude of a wavevector is called wavenumber.) The constant φ {\displaystyle \varphi } is the phase of the wavefront (a plane of a constant phase) through the origin r = 0 {\displaystyle \mathbf {r} =0} at time t {\displaystyle t} , and e {\displaystyle \mathbf {e} } is a unit normal vector to this wavefront. The wavefronts with phases φ + ( 2 π ) n {\displaystyle \varphi +(2\pi )n} , where n {\displaystyle n} represents any integer , comprise a set of parallel planes, equally spaced by the wavelength λ {\displaystyle \lambda } .
In general, a geometric lattice is an infinite, regular array of vertices (points) in space, which can be modelled vectorially as a Bravais lattice . Some lattices may be skew, which means that their primary lines may not necessarily be at right angles. In reciprocal space, a reciprocal lattice is defined as the set of wavevectors k {\displaystyle \mathbf {k} } of plane waves in the Fourier series of any function f ( r ) {\displaystyle f(\mathbf {r} )} whose periodicity is compatible with that of an initial direct lattice in real space. Equivalently, a wavevector is a vertex of the reciprocal lattice if it corresponds to a plane wave in real space whose phase at any given time is the same (actually differs by ( 2 π ) n {\displaystyle (2\pi )n} with an integer n {\displaystyle n} ) at every direct lattice vertex.
One heuristic approach to constructing the reciprocal lattice in three dimensions is to write the position vector of a vertex of the direct lattice as R = n 1 a 1 + n 2 a 2 + n 3 a 3 {\displaystyle \mathbf {R} =n_{1}\mathbf {a} _{1}+n_{2}\mathbf {a} _{2}+n_{3}\mathbf {a} _{3}} , where the n i {\displaystyle n_{i}} are integers defining the vertex and the a i {\displaystyle \mathbf {a} _{i}} are linearly independent primitive translation vectors (or shortly called primitive vectors) that are characteristic of the lattice. There is then a unique plane wave (up to a factor of negative one), whose wavefront through the origin R = 0 {\displaystyle \mathbf {R} =0} contains the direct lattice points at a 2 {\displaystyle \mathbf {a} _{2}} and a 3 {\displaystyle \mathbf {a} _{3}} , and with its adjacent wavefront (whose phase differs by 2 π {\displaystyle 2\pi } or − 2 π {\displaystyle -2\pi } from the former wavefront passing the origin) passing through a 1 {\displaystyle \mathbf {a} _{1}} . Its angular wavevector takes the form b 1 = 2 π e 1 / λ 1 {\displaystyle \mathbf {b} _{1}=2\pi \mathbf {e} _{1}/\lambda _{1}} , where e 1 {\displaystyle \mathbf {e} _{1}} is the unit vector perpendicular to these two adjacent wavefronts and the wavelength λ 1 {\displaystyle \lambda _{1}} must satisfy λ 1 = a 1 ⋅ e 1 {\displaystyle \lambda _{1}=\mathbf {a} _{1}\cdot \mathbf {e} _{1}} , means that λ 1 {\displaystyle \lambda _{1}} is equal to the distance between the two wavefronts. Hence by construction a 1 ⋅ b 1 = 2 π {\displaystyle \mathbf {a} _{1}\cdot \mathbf {b} _{1}=2\pi } and a 2 ⋅ b 1 = a 3 ⋅ b 1 = 0 {\displaystyle \mathbf {a} _{2}\cdot \mathbf {b} _{1}=\mathbf {a} _{3}\cdot \mathbf {b} _{1}=0} .
Cycling through the indices in turn, the same method yields three wavevectors b j {\displaystyle \mathbf {b} _{j}} with a i ⋅ b j = 2 π δ i j {\displaystyle \mathbf {a} _{i}\cdot \mathbf {b} _{j}=2\pi \,\delta _{ij}} , where the Kronecker delta δ i j {\displaystyle \delta _{ij}} equals one when i = j {\displaystyle i=j} and is zero otherwise. The b j {\displaystyle \mathbf {b} _{j}} comprise a set of three primitive wavevectors or three primitive translation vectors for the reciprocal lattice, each of whose vertices takes the form G = m 1 b 1 + m 2 b 2 + m 3 b 3 {\displaystyle \mathbf {G} =m_{1}\mathbf {b} _{1}+m_{2}\mathbf {b} _{2}+m_{3}\mathbf {b} _{3}} , where the m j {\displaystyle m_{j}} are integers. The reciprocal lattice is also a Bravais lattice as it is formed by integer combinations of the primitive vectors, that are b 1 {\displaystyle \mathbf {b} _{1}} , b 2 {\displaystyle \mathbf {b} _{2}} , and b 3 {\displaystyle \mathbf {b} _{3}} in this case. Simple algebra then shows that, for any plane wave with a wavevector G {\displaystyle \mathbf {G} } on the reciprocal lattice, the total phase shift G ⋅ R {\displaystyle \mathbf {G} \cdot \mathbf {R} } between the origin and any point R {\displaystyle \mathbf {R} } on the direct lattice is a multiple of 2 π {\displaystyle 2\pi } (that can be possibly zero if the multiplier is zero), so the phase of the plane wave with G {\displaystyle \mathbf {G} } will essentially be equal for every direct lattice vertex, in conformity with the reciprocal lattice definition above. (Although any wavevector G {\displaystyle \mathbf {G} } on the reciprocal lattice does always take this form, this derivation is motivational, rather than rigorous, because it has omitted the proof that no other possibilities exist.)
The Brillouin zone is a primitive cell (more specifically a Wigner–Seitz cell ) of the reciprocal lattice, which plays an important role in solid state physics due to Bloch's theorem . In pure mathematics , the dual space of linear forms and the dual lattice provide more abstract generalizations of reciprocal space and the reciprocal lattice.
Assuming a three-dimensional Bravais lattice and labelling each lattice vector (a vector indicating a lattice point) by the subscript n = ( n 1 , n 2 , n 3 ) {\displaystyle n=(n_{1},n_{2},n_{3})} as a 3-tuple of integers, R n = n 1 a 1 + n 2 a 2 + n 3 a 3 , n 1 , n 2 , n 3 ∈ Z , {\displaystyle \mathbf {R} _{n}=n_{1}\mathbf {a} _{1}+n_{2}\mathbf {a} _{2}+n_{3}\mathbf {a} _{3},\quad n_{1},n_{2},n_{3}\in \mathbb {Z} ,}
where Z {\displaystyle \mathbb {Z} } is the set of integers and a i {\displaystyle \mathbf {a} _{i}} is a primitive translation vector or, for short, a primitive vector. Taking a function f ( r ) {\displaystyle f(\mathbf {r} )} where r {\displaystyle \mathbf {r} } is a position vector from the origin R n = 0 {\displaystyle \mathbf {R} _{n}=0} to any position, if f ( r ) {\displaystyle f(\mathbf {r} )} follows the periodicity of this lattice, e.g. the function describing the electronic density in an atomic crystal, it is useful to write f ( r ) {\displaystyle f(\mathbf {r} )} as a multi-dimensional Fourier series f ( r ) = ∑ m f m e i G m ⋅ r {\displaystyle f(\mathbf {r} )=\sum _{m}f_{m}e^{i\mathbf {G} _{m}\cdot \mathbf {r} }}
where now the subscript m = ( m 1 , m 2 , m 3 ) {\displaystyle m=(m_{1},m_{2},m_{3})} , so this is a triple sum.
As f ( r ) {\displaystyle f(\mathbf {r} )} follows the periodicity of the lattice, translating r {\displaystyle \mathbf {r} } by any lattice vector R n {\displaystyle \mathbf {R} _{n}} gives the same value, hence f ( r + R n ) = f ( r ) . {\displaystyle f(\mathbf {r} +\mathbf {R} _{n})=f(\mathbf {r} ).}
Expressing the above instead in terms of their Fourier series we have ∑ m f m e i G m ⋅ r = ∑ m f m e i G m ⋅ ( r + R n ) = ∑ m f m e i G m ⋅ R n e i G m ⋅ r . {\displaystyle \sum _{m}f_{m}e^{i\mathbf {G} _{m}\cdot \mathbf {r} }=\sum _{m}f_{m}e^{i\mathbf {G} _{m}\cdot (\mathbf {r} +\mathbf {R} _{n})}=\sum _{m}f_{m}e^{i\mathbf {G} _{m}\cdot \mathbf {R} _{n}}\,e^{i\mathbf {G} _{m}\cdot \mathbf {r} }.}
Because equality of two Fourier series implies equality of their coefficients, e i G m ⋅ R n = 1 {\displaystyle e^{i\mathbf {G} _{m}\cdot \mathbf {R} _{n}}=1} , which only holds when G m ⋅ R n = 2 π N , N ∈ Z . {\displaystyle \mathbf {G} _{m}\cdot \mathbf {R} _{n}=2\pi N,\quad N\in \mathbb {Z} .}
Mathematically, the reciprocal lattice is the set of all vectors G m {\displaystyle \mathbf {G} _{m}} that are wavevectors of plane waves in the Fourier series of a spatial function whose periodicity is the same as that of the direct lattice (the set of all direct lattice point position vectors R n {\displaystyle \mathbf {R} _{n}} ); the G m {\displaystyle \mathbf {G} _{m}} satisfy this equality for all R n {\displaystyle \mathbf {R} _{n}} . Each plane wave in the Fourier series has the same phase (actually it can differ by a multiple of 2 π {\displaystyle 2\pi } ) at all the lattice points R n {\displaystyle \mathbf {R} _{n}} .
As shown in the section on multi-dimensional Fourier series, G m {\displaystyle \mathbf {G} _{m}} can be chosen in the form G m = m 1 b 1 + m 2 b 2 + m 3 b 3 {\displaystyle \mathbf {G} _{m}=m_{1}\mathbf {b} _{1}+m_{2}\mathbf {b} _{2}+m_{3}\mathbf {b} _{3}} where a i ⋅ b j = 2 π δ i j {\displaystyle \mathbf {a} _{i}\cdot \mathbf {b} _{j}=2\pi \,\delta _{ij}} . With this form, the reciprocal lattice, as the set of all wavevectors G m {\displaystyle \mathbf {G} _{m}} for the Fourier series of a spatial function whose periodicity follows R n {\displaystyle \mathbf {R} _{n}} , is itself a Bravais lattice, as it is formed by integer combinations of its own primitive translation vectors ( b 1 , b 2 , b 3 ) {\displaystyle \left(\mathbf {b_{1}} ,\mathbf {b} _{2},\mathbf {b} _{3}\right)} , and the reciprocal of the reciprocal lattice is the original lattice, which reveals the Pontryagin duality of their respective vector spaces . (There may be other forms of G m {\displaystyle \mathbf {G} _{m}} ; any valid form results in the same reciprocal lattice.)
For an infinite two-dimensional lattice, defined by its primitive vectors ( a 1 , a 2 ) {\displaystyle \left(\mathbf {a} _{1},\mathbf {a} _{2}\right)} , its reciprocal lattice can be determined by generating its two reciprocal primitive vectors, through the following formulae,
where m i {\displaystyle m_{i}} is an integer and
Here Q {\displaystyle \mathbf {Q} } represents a 90 degree rotation matrix , i.e. a quarter turn. The anti-clockwise rotation and the clockwise rotation can both be used to determine the reciprocal lattice: If Q {\displaystyle \mathbf {Q} } is the anti-clockwise rotation and Q ′ {\displaystyle \mathbf {Q'} } is the clockwise rotation, Q v = − Q ′ v {\displaystyle \mathbf {Q} \,\mathbf {v} =-\mathbf {Q'} \,\mathbf {v} } for all vectors v {\displaystyle \mathbf {v} } . Thus, using the permutation
we obtain
Notably, in a 3D space this 2D reciprocal lattice is an infinitely extended set of Bragg rods—described by Sung et al. [ 1 ]
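Since the explicit two-dimensional formulae are not reproduced above, the construction can be sketched as follows. The particular expressions for b1 and b2 below are one choice, an assumption on our part, that satisfies the stated condition a_i · b_j = 2πδ_ij with Q the 90-degree rotation.

```python
import numpy as np

# Sketch of the two-dimensional construction. The expressions for b1 and b2 are
# one choice satisfying a_i . b_j = 2*pi*delta_ij, using the quarter-turn Q
# discussed in the text; they are not quoted from the article.

def reciprocal_2d(a1, a2):
    Q = np.array([[0.0, -1.0],
                  [1.0,  0.0]])          # anti-clockwise quarter turn
    b1 = 2 * np.pi * (Q @ a2) / (a1 @ (Q @ a2))
    b2 = 2 * np.pi * (Q @ a1) / (a2 @ (Q @ a1))
    return b1, b2

# Example: an oblique (skew) 2D lattice.
a1 = np.array([1.0, 0.0])
a2 = np.array([0.5, 1.0])
b1, b2 = reciprocal_2d(a1, a2)

# Verify the defining condition a_i . b_j = 2*pi*delta_ij.
for ai, name_a in [(a1, "a1"), (a2, "a2")]:
    for bj, name_b in [(b1, "b1"), (b2, "b2")]:
        print(f"{name_a}.{name_b} / (2*pi) = {ai @ bj / (2 * np.pi):+.3f}")
```

Using the clockwise rotation instead flips the sign of both the numerator and the denominator in each expression, so the same reciprocal vectors result, consistent with the remark above.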
For an infinite three-dimensional lattice R n = n 1 a 1 + n 2 a 2 + n 3 a 3 {\displaystyle \mathbf {R} _{n}=n_{1}\mathbf {a} _{1}+n_{2}\mathbf {a} _{2}+n_{3}\mathbf {a} _{3}} , defined by its primitive vectors ( a 1 , a 2 , a 3 ) {\displaystyle \left(\mathbf {a_{1}} ,\mathbf {a} _{2},\mathbf {a} _{3}\right)} and the subscript of integers n = ( n 1 , n 2 , n 3 ) {\displaystyle n=\left(n_{1},n_{2},n_{3}\right)} , its reciprocal lattice G m = m 1 b 1 + m 2 b 2 + m 3 b 3 {\displaystyle \mathbf {G} _{m}=m_{1}\mathbf {b} _{1}+m_{2}\mathbf {b} _{2}+m_{3}\mathbf {b} _{3}} with the integer subscript m = ( m 1 , m 2 , m 3 ) {\displaystyle m=(m_{1},m_{2},m_{3})} can be determined by generating its three reciprocal primitive vectors ( b 1 , b 2 , b 3 ) {\displaystyle \left(\mathbf {b_{1}} ,\mathbf {b} _{2},\mathbf {b} _{3}\right)} b 1 = 2 π V a 2 × a 3 b 2 = 2 π V a 3 × a 1 b 3 = 2 π V a 1 × a 2 {\displaystyle {\begin{aligned}\mathbf {b} _{1}&={\frac {2\pi }{V}}\ \mathbf {a} _{2}\times \mathbf {a} _{3}\\[8pt]\mathbf {b} _{2}&={\frac {2\pi }{V}}\ \mathbf {a} _{3}\times \mathbf {a} _{1}\\[8pt]\mathbf {b} _{3}&={\frac {2\pi }{V}}\ \mathbf {a} _{1}\times \mathbf {a} _{2}\end{aligned}}} where V = a 1 ⋅ ( a 2 × a 3 ) = a 2 ⋅ ( a 3 × a 1 ) = a 3 ⋅ ( a 1 × a 2 ) {\displaystyle V=\mathbf {a} _{1}\cdot \left(\mathbf {a} _{2}\times \mathbf {a} _{3}\right)=\mathbf {a} _{2}\cdot \left(\mathbf {a} _{3}\times \mathbf {a} _{1}\right)=\mathbf {a} _{3}\cdot \left(\mathbf {a} _{1}\times \mathbf {a} _{2}\right)} is the scalar triple product . The choice of these ( b 1 , b 2 , b 3 ) {\displaystyle \left(\mathbf {b_{1}} ,\mathbf {b} _{2},\mathbf {b} _{3}\right)} is to satisfy a i ⋅ b j = 2 π δ i j {\displaystyle \mathbf {a} _{i}\cdot \mathbf {b} _{j}=2\pi \,\delta _{ij}} as the known condition (There may be other condition.) of primitive translation vectors for the reciprocal lattice derived in the heuristic approach above and the section multi-dimensional Fourier series [ broken anchor ] . This choice also satisfies the requirement of the reciprocal lattice e i G m ⋅ R n = 1 {\displaystyle e^{i\mathbf {G} _{m}\cdot \mathbf {R} _{n}}=1} mathematically derived above . Using column vector representation of (reciprocal) primitive vectors, the formulae above can be rewritten using matrix inversion :
This method appeals to the definition, and allows generalization to arbitrary dimensions. The cross product formula dominates introductory materials on crystallography.
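A minimal NumPy sketch of the three-dimensional formulas above, together with the matrix-inversion form they are equivalent to, B^T = 2π A^{-1}, where the rows of A are a1, a2, a3. The example lattice vectors are illustrative.

```python
import numpy as np

# Sketch: reciprocal primitive vectors from the cross-product formulas above,
# and from the equivalent matrix-inversion form B^T = 2*pi * inv(A),
# where the rows of A are a1, a2, a3 and the rows of B are b1, b2, b3.

def reciprocal_3d(a1, a2, a3):
    V = a1 @ np.cross(a2, a3)                 # scalar triple product (cell volume)
    b1 = 2 * np.pi * np.cross(a2, a3) / V
    b2 = 2 * np.pi * np.cross(a3, a1) / V
    b3 = 2 * np.pi * np.cross(a1, a2) / V
    return np.array([b1, b2, b3])

A = np.array([[1.0, 0.0, 0.0],                # rows are a1, a2, a3 (example lattice)
              [0.0, 2.0, 0.0],
              [0.0, 0.5, 3.0]])
B_cross = reciprocal_3d(*A)
B_inv   = 2 * np.pi * np.linalg.inv(A).T      # rows are b1, b2, b3

print(np.allclose(B_cross, B_inv))                         # True: both routes agree
print(np.allclose(A @ B_cross.T, 2 * np.pi * np.eye(3)))   # a_i . b_j = 2*pi*delta_ij
```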
The above definition is called the "physics" definition, as the factor of 2 π {\displaystyle 2\pi } comes naturally from the study of periodic structures. An essentially equivalent definition, the "crystallographer's" definition, comes from defining the reciprocal lattice K m = G m / 2 π {\displaystyle \mathbf {K} _{m}=\mathbf {G} _{m}/2\pi } , which changes the reciprocal primitive vectors to be
and so on for the other primitive vectors. The crystallographer's definition has the advantage that the definition of b 1 {\displaystyle \mathbf {b} _{1}} is just the reciprocal magnitude of a 1 {\displaystyle \mathbf {a} _{1}} in the direction of a 2 × a 3 {\displaystyle \mathbf {a} _{2}\times \mathbf {a} _{3}} , dropping the factor of 2 π {\displaystyle 2\pi } . This can simplify certain mathematical manipulations, and expresses reciprocal lattice dimensions in units of spatial frequency . It is a matter of taste which definition of the lattice is used, as long as the two are not mixed.
m = ( m 1 , m 2 , m 3 ) {\displaystyle m=(m_{1},m_{2},m_{3})} is conventionally written as ( h , k , ℓ ) {\displaystyle (h,k,\ell )} or ( h k ℓ ) {\displaystyle (hk\ell )} , called Miller indices ; m 1 {\displaystyle m_{1}} is replaced with h {\displaystyle h} , m 2 {\displaystyle m_{2}} replaced with k {\displaystyle k} , and m 3 {\displaystyle m_{3}} replaced with ℓ {\displaystyle \ell } . Each lattice point ( h k ℓ ) {\displaystyle (hk\ell )} in the reciprocal lattice corresponds to a set of lattice planes ( h k ℓ ) {\displaystyle (hk\ell )} in the real space lattice. (A lattice plane is a plane crossing lattice points.) The direction of the reciprocal lattice vector corresponds to the normal to the real space planes. The magnitude of the reciprocal lattice vector K m {\displaystyle \mathbf {K} _{m}} is given in reciprocal length and is equal to the reciprocal of the interplanar spacing of the real space planes.
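A short sketch of this correspondence for a simple cubic lattice; the lattice parameter is an illustrative value. In the physics convention used here, the spacing is d_hkl = 2π/|G_hkl|, which is the same as 1/|K_hkl| in the crystallographer's convention.

```python
import numpy as np

# Sketch: interplanar spacing d_hkl from the reciprocal lattice vector
# G_hkl = h*b1 + k*b2 + l*b3, with d = 2*pi / |G| (physics convention).
# A simple cubic lattice with lattice parameter a = 4.0 is an illustrative choice.

a = 4.0
A = a * np.eye(3)                         # rows are a1, a2, a3
B = 2 * np.pi * np.linalg.inv(A).T        # rows are b1, b2, b3

def d_spacing(h, k, l):
    G = h * B[0] + k * B[1] + l * B[2]
    return 2 * np.pi / np.linalg.norm(G)

for hkl in [(1, 0, 0), (1, 1, 0), (1, 1, 1)]:
    # For a cubic lattice this reduces to a / sqrt(h^2 + k^2 + l^2).
    print(hkl, round(d_spacing(*hkl), 4), round(a / np.sqrt(sum(i * i for i in hkl)), 4))
```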
The formula for n {\displaystyle n} dimensions can be derived assuming an n {\displaystyle n} - dimensional real vector space V {\displaystyle V} with a basis ( a 1 , … , a n ) {\displaystyle (\mathbf {a} _{1},\ldots ,\mathbf {a} _{n})} and an inner product g : V × V → R {\displaystyle g\colon V\times V\to \mathbf {R} } . The reciprocal lattice vectors are uniquely determined by the formula g ( a i , b j ) = 2 π δ i j {\displaystyle g(\mathbf {a} _{i},\mathbf {b} _{j})=2\pi \delta _{ij}} . Using the permutation
they can be determined with the following formula:
Here, ω : V n → R {\displaystyle \omega \colon V^{n}\to \mathbf {R} } is the volume form , g − 1 {\displaystyle g^{-1}} is the inverse of the vector space isomorphism g ^ : V → V ∗ {\displaystyle {\hat {g}}\colon V\to V^{*}} defined by g ^ ( v ) ( w ) = g ( v , w ) {\displaystyle {\hat {g}}(v)(w)=g(v,w)} and ⌟ {\displaystyle \lrcorner } denotes the inner multiplication .
One can verify that this formula is equivalent to the known formulas for the two- and three-dimensional case by using the following facts: In three dimensions, ω ( u , v , w ) = g ( u × v , w ) {\displaystyle \omega (u,v,w)=g(u\times v,w)} and in two dimensions, ω ( v , w ) = g ( R v , w ) {\displaystyle \omega (v,w)=g(Rv,w)} , where R ∈ SO ( 2 ) ⊂ L ( V , V ) {\displaystyle R\in {\text{SO}}(2)\subset L(V,V)} is the rotation by 90 degrees (just like the volume form, the angle assigned to a rotation depends on the choice of orientation [ 2 ] ).
Reciprocal lattices for the cubic crystal system are as follows.
The simple cubic Bravais lattice , with cubic primitive cell of side a {\displaystyle a} , has for its reciprocal a simple cubic lattice with a cubic primitive cell of side 2 π a {\textstyle {\frac {2\pi }{a}}} (or 1 a {\textstyle {\frac {1}{a}}} in the crystallographer's definition). The cubic lattice is therefore said to be self-dual, having the same symmetry in reciprocal space as in real space.
The reciprocal lattice to a face-centered cubic (FCC) lattice is the body-centered cubic (BCC) lattice, with a cube side of 4 π a {\textstyle {\frac {4\pi }{a}}} .
Consider an FCC compound unit cell. Locate a primitive unit cell of the FCC; i.e., a unit cell with one lattice point. Now take one of the vertices of the primitive unit cell as the origin. Give the basis vectors of the real lattice. Then from the known formulae, you can calculate the basis vectors of the reciprocal lattice. These reciprocal lattice vectors of the FCC represent the basis vectors of a BCC real lattice. The basis vectors of a real BCC lattice and the reciprocal lattice of an FCC resemble each other in direction but not in magnitude.
The reciprocal lattice to a BCC lattice is the FCC lattice, with a cube side of 4 π / a {\textstyle 4\pi /a} .
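This FCC–BCC duality can be checked numerically. The primitive vectors used below are the standard textbook choices for FCC and BCC, which are an assumption here since the text does not list them explicitly.

```python
import numpy as np

# Sketch: verify numerically that the reciprocal of an FCC lattice is BCC,
# with the expected conventional cube side 4*pi/a. The primitive vectors below
# are the standard choices (an assumption, not quoted from the article).

a = 1.0
fcc = (a / 2) * np.array([[0, 1, 1],
                          [1, 0, 1],
                          [1, 1, 0]], dtype=float)      # rows are a1, a2, a3

recip = 2 * np.pi * np.linalg.inv(fcc).T                # rows are b1, b2, b3

c = 4 * np.pi / a                                       # expected BCC cube side
bcc = (c / 2) * np.array([[-1, 1, 1],
                          [1, -1, 1],
                          [1, 1, -1]], dtype=float)

print(np.allclose(recip, bcc))                          # True
```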
It can be proven that only the Bravais lattices which have 90 degrees between ( a 1 , a 2 , a 3 ) {\displaystyle \left(\mathbf {a} _{1},\mathbf {a} _{2},\mathbf {a} _{3}\right)} (cubic, tetragonal, orthorhombic) have primitive translation vectors for the reciprocal lattice, ( b 1 , b 2 , b 3 ) {\displaystyle \left(\mathbf {b} _{1},\mathbf {b} _{2},\mathbf {b} _{3}\right)} , parallel to their real-space vectors.
The reciprocal to a simple hexagonal Bravais lattice with lattice constants a {\textstyle a} and c {\textstyle c} is another simple hexagonal lattice with lattice constants 2 π / c {\textstyle 2\pi /c} and 4 π / ( a 3 ) {\textstyle 4\pi /(a{\sqrt {3}})} rotated through 90° about the c axis with respect to the direct lattice. The simple hexagonal lattice is therefore said to be self-dual, having the same symmetry in reciprocal space as in real space. Primitive translation vectors for this simple hexagonal Bravais lattice vectors are a 1 = 3 2 a x ^ + 1 2 a y ^ , a 2 = − 3 2 a x ^ + 1 2 a y ^ , a 3 = c z ^ . {\displaystyle {\begin{aligned}a_{1}&={\frac {\sqrt {3}}{2}}a{\hat {x}}+{\frac {1}{2}}a{\hat {y}},\\[8pt]a_{2}&=-{\frac {\sqrt {3}}{2}}a{\hat {x}}+{\frac {1}{2}}a{\hat {y}},\\[8pt]a_{3}&=c{\hat {z}}.\end{aligned}}} [ 3 ]
One path to the reciprocal lattice of an arbitrary collection of atoms comes from the idea of scattered waves in the Fraunhofer (long-distance or lens back-focal-plane) limit as a Huygens-style sum of amplitudes from all points of scattering (in this case from each individual atom). [ 4 ] This sum is denoted by the complex amplitude F {\displaystyle F} in the equation below, because it is also the Fourier transform (as a function of spatial frequency or reciprocal distance) of an effective scattering potential in direct space:
Here g = q /(2 π ) is the scattering vector q in crystallographer units, N is the number of atoms, f j [ g ] is the atomic scattering factor for atom j and scattering vector g , while r j is the vector position of atom j . The Fourier phase depends on one's choice of coordinate origin.
For the special case of an infinite periodic crystal, the scattered amplitude F = M F h,k,ℓ from M unit cells (as in the cases above) turns out to be non-zero only for integer values of ( h , k , ℓ ) {\displaystyle (h,k,\ell )} , where
when there are j = 1, m atoms inside the unit cell whose fractional lattice indices are respectively { u j , v j , w j }. To consider effects due to finite crystal size, of course, a shape convolution for each point or the equation above for a finite lattice must be used instead.
Whether the array of atoms is finite or infinite, one can also imagine an "intensity reciprocal lattice" I[ g ], which relates to the amplitude lattice F via the usual relation I = F * F where F * is the complex conjugate of F. Since Fourier transformation is reversible, of course, this act of conversion to intensity tosses out "all except 2nd moment" (i.e. the phase) information. For the case of an arbitrary collection of atoms, the intensity reciprocal lattice is therefore:
Here r jk is the vector separation between atom j and atom k . One can also use this to predict the effect of nano-crystallite shape, and subtle changes in beam orientation, on detected diffraction peaks even if in some directions the cluster is only one atom thick. On the down side, scattering calculations using the reciprocal lattice basically consider an incident plane wave. Thus after a first look at reciprocal lattice (kinematic scattering) effects, beam broadening and multiple scattering (i.e. dynamical ) effects may be important to consider as well.
There are actually two versions in mathematics of the abstract dual lattice concept, for a given lattice L in a real vector space V , of finite dimension .
The first, which generalises directly the reciprocal lattice construction, uses Fourier analysis . It may be stated simply in terms of Pontryagin duality . The dual group V ^ to V is again a real vector space, and its closed subgroup L ^ dual to L turns out to be a lattice in V ^. Therefore, L ^ is the natural candidate for dual lattice , in a different vector space (of the same dimension).
The other aspect is seen in the presence of a quadratic form Q on V ; if it is non-degenerate it allows an identification of the dual space V * of V with V . The relation of V * to V is not intrinsic; it depends on a choice of Haar measure (volume element) on V . But given an identification of the two, which is in any case well-defined up to a scalar , the presence of Q allows one to speak to the dual lattice to L while staying within V .
In mathematics, the dual lattice of a given lattice L in an abelian locally compact topological group G is the subgroup L ∗ of the dual group of G consisting of all continuous characters that are equal to one at each point of L .
In discrete mathematics , a lattice is a locally discrete set of points described by all integral linear combinations of dim = n linearly independent vectors in R n . The dual lattice is then defined by all points in the linear span of the original lattice (typically all of R n ) with the property that an integer results from the inner product with all elements of the original lattice. It follows that the dual of the dual lattice is the original lattice.
Furthermore, if we allow the matrix B to have columns as the linearly independent vectors that describe the lattice, then the matrix A = B ( B T B ) − 1 {\displaystyle A=B\left(B^{\mathsf {T}}B\right)^{-1}} has columns of vectors that describe the dual lattice.
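A minimal sketch of this formula; the example basis B is illustrative, and the check below confirms that the inner products of the dual basis vectors with the original basis vectors are integers (here the identity), as the definition requires.

```python
import numpy as np

# Sketch of the dual-lattice formula above: if the columns of B are the
# linearly independent vectors generating the lattice, the columns of
# A = B (B^T B)^{-1} generate the dual lattice.

B = np.array([[2.0, 1.0],
              [0.0, 3.0],
              [1.0, 0.0]])                 # a rank-2 lattice embedded in R^3

A = B @ np.linalg.inv(B.T @ B)

# Inner products of dual basis vectors with original basis vectors are integers
# (here exactly the 2x2 identity matrix).
print(np.round(A.T @ B, 10))
```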
In quantum physics , reciprocal space is closely related to momentum space according to the proportionality p = ℏ k {\displaystyle \mathbf {p} =\hbar \mathbf {k} } , where p {\displaystyle \mathbf {p} } is the momentum vector and ℏ {\displaystyle \hbar } is the reduced Planck constant . | https://en.wikipedia.org/wiki/Reciprocal_lattice |
In algebra , given a polynomial p ( x ) = a 0 + a 1 x + a 2 x 2 + ⋯ + a n x n {\displaystyle p(x)=a_{0}+a_{1}x+a_{2}x^{2}+\cdots +a_{n}x^{n}}
with coefficients from an arbitrary field , its reciprocal polynomial or reflected polynomial , [ 1 ] [ 2 ] denoted by p ∗ or p R , [ 2 ] [ 1 ] is the polynomial [ 3 ] p ∗ ( x ) = a n + a n − 1 x + ⋯ + a 0 x n = x n p ( x − 1 ) . {\displaystyle p^{*}(x)=a_{n}+a_{n-1}x+\cdots +a_{0}x^{n}=x^{n}p(x^{-1}).}
That is, the coefficients of p ∗ are the coefficients of p in reverse order. Reciprocal polynomials arise naturally in linear algebra as the characteristic polynomial of the inverse of a matrix .
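A minimal sketch of the coefficient reversal and of the defining identity p*(x) = x^n p(1/x); the example polynomial and evaluation point are illustrative.

```python
# Minimal sketch: the reciprocal polynomial reverses the coefficient list.
# Coefficients are stored lowest degree first: p(x) = a0 + a1*x + ... + an*x^n.

def reciprocal_poly(coeffs):
    """Return the coefficients of p*(x) = x^n * p(1/x)."""
    return list(reversed(coeffs))

def evaluate(coeffs, x):
    return sum(a * x**i for i, a in enumerate(coeffs))

p = [2, 0, -3, 5]            # p(x)  = 2 - 3x^2 + 5x^3
p_star = reciprocal_poly(p)  # p*(x) = 5 - 3x + 2x^3

x = 1.7
n = len(p) - 1
# Check the defining identity p*(x) = x^n * p(1/x).
print(abs(evaluate(p_star, x) - x**n * evaluate(p, 1 / x)) < 1e-9)
```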
In the special case where the field is the complex numbers , when p ( x ) = a 0 + a 1 x + ⋯ + a n x n , {\displaystyle p(x)=a_{0}+a_{1}x+\cdots +a_{n}x^{n},}
the conjugate reciprocal polynomial , denoted p † , is defined by p † ( x ) = a n ¯ + a n − 1 ¯ x + ⋯ + a 0 ¯ x n = x n p ( x ¯ − 1 ) ¯ , {\displaystyle p^{\dagger }(x)={\overline {a_{n}}}+{\overline {a_{n-1}}}x+\cdots +{\overline {a_{0}}}x^{n}=x^{n}{\overline {p({\bar {x}}^{-1})}},}
where a i ¯ {\displaystyle {\overline {a_{i}}}} denotes the complex conjugate of a i {\displaystyle a_{i}} , and is also called the reciprocal polynomial when no confusion can arise.
A polynomial p is called self-reciprocal or palindromic if p ( x ) = p ∗ ( x ) .
The coefficients of a self-reciprocal polynomial satisfy a i = a n − i for all i .
Reciprocal polynomials have several connections with their original polynomials, including:
Other properties of reciprocal polynomials may be obtained, for instance:
A self-reciprocal polynomial is also called palindromic because its coefficients, when the polynomial is written in the order of ascending or descending powers, form a palindrome . That is, if P ( x ) = a 0 + a 1 x + ⋯ + a n x n {\displaystyle P(x)=a_{0}+a_{1}x+\cdots +a_{n}x^{n}}
is a polynomial of degree n , then P is palindromic if a i = a n − i for i = 0, 1, ..., n .
Similarly, a polynomial P of degree n is called antipalindromic if a i = − a n − i for i = 0, 1, ..., n . That is, a polynomial P is antipalindromic if P ( x ) = – P ∗ ( x ) .
From the properties of the binomial coefficients , it follows that the polynomials P ( x ) = ( x + 1) n are palindromic for all positive integers n , while the polynomials Q ( x ) = ( x – 1) n are palindromic when n is even and antipalindromic when n is odd .
Other examples of palindromic polynomials include cyclotomic polynomials and Eulerian polynomials .
A polynomial with real coefficients all of whose complex roots lie on the unit circle in the complex plane (that is, all the roots have modulus 1) is either palindromic or antipalindromic. [ 10 ]
A polynomial is conjugate reciprocal if p ( x ) ≡ p † ( x ) {\displaystyle p(x)\equiv p^{\dagger }(x)} and self-inversive if p ( x ) = ω p † ( x ) {\displaystyle p(x)=\omega p^{\dagger }(x)} for a scale factor ω on the unit circle . [ 11 ]
If p ( z ) is the minimal polynomial of z 0 with | z 0 | = 1, z 0 ≠ 1 , and p ( z ) has real coefficients, then p ( z ) is self-reciprocal. This follows because z 0 n p ( z 0 ¯ − 1 ) ¯ = z 0 n p ( z 0 ) ¯ = 0 , {\displaystyle z_{0}^{n}{\overline {p({\overline {z_{0}}}^{-1})}}=z_{0}^{n}{\overline {p(z_{0})}}=0,} since z 0 ¯ − 1 = z 0 {\displaystyle {\overline {z_{0}}}^{-1}=z_{0}} when | z 0 | = 1 {\displaystyle |z_{0}|=1} .
So z 0 is a root of the polynomial z n p ( z ¯ − 1 ) ¯ {\displaystyle z^{n}{\overline {p({\bar {z}}^{-1})}}} which has degree n . But, the minimal polynomial is unique, hence z n p ( z ¯ − 1 ) ¯ = c p ( z ) {\displaystyle z^{n}{\overline {p({\bar {z}}^{-1})}}=c\,p(z)}
for some constant c , i.e. c a i = a n − i ¯ = a n − i {\displaystyle ca_{i}={\overline {a_{n-i}}}=a_{n-i}} . Sum from i = 0 to n and note that 1 is not a root of p . We conclude that c = 1 .
A consequence is that the cyclotomic polynomials Φ n are self-reciprocal for n > 1 . This is used in the special number field sieve to allow numbers of the form x 11 ± 1, x 13 ± 1, x 15 ± 1 and x 21 ± 1 to be factored taking advantage of the algebraic factors by using polynomials of degree 5, 6, 4 and 6 respectively – note that φ ( Euler's totient function ) of the exponents are 10, 12, 8 and 12. [ citation needed ]
Per Cohn's theorem , a self-inversive polynomial has as many roots in the unit disk { z ∈ C : | z | < 1 } {\displaystyle \{z\in \mathbb {C} :|z|<1\}} as the reciprocal polynomial of its derivative . [ 12 ] [ 13 ]
The reciprocal polynomial finds a use in the theory of cyclic error correcting codes . Suppose x n − 1 can be factored into the product of two polynomials, say x n − 1 = g ( x ) p ( x ) . When g ( x ) generates a cyclic code C , then the reciprocal polynomial p ∗ generates C ⊥ , the orthogonal complement of C . [ 14 ] Also, C is self-orthogonal (that is, C ⊆ C ⊥ ) , if and only if p ∗ divides g ( x ) . [ 15 ] | https://en.wikipedia.org/wiki/Reciprocal_polynomial |
In calculus , the reciprocal rule gives the derivative of the reciprocal of a function f in terms of the derivative of f . The reciprocal rule can be used to show that the power rule holds for negative exponents if it has already been established for positive exponents. Also, one can readily deduce the quotient rule from the reciprocal rule and the product rule .
The reciprocal rule states that if f is differentiable at a point x and f ( x ) ≠ 0 then g( x ) = 1/ f ( x ) is also differentiable at x and
g ′ ( x ) = − f ′ ( x ) f ( x ) 2 . {\displaystyle g'(x)=-{\frac {f'(x)}{f(x)^{2}}}.}
This proof relies on the premise that f {\displaystyle f} is differentiable at x , {\displaystyle x,} and on the theorem that f {\displaystyle f} is then also necessarily continuous there. Applying the definition of the derivative of g {\displaystyle g} at x {\displaystyle x} with f ( x ) ≠ 0 {\displaystyle f(x)\neq 0} gives g ′ ( x ) = d d x ( 1 f ( x ) ) = lim h → 0 ( 1 f ( x + h ) − 1 f ( x ) h ) = lim h → 0 ( f ( x ) − f ( x + h ) h ⋅ f ( x ) ⋅ f ( x + h ) ) = lim h → 0 ( − ( f ( x + h ) − f ( x ) h ) ⋅ ( 1 f ( x ) ⋅ f ( x + h ) ) ) {\displaystyle {\begin{aligned}g'(x)={\frac {d}{dx}}\left({\frac {1}{f(x)}}\right)&=\lim _{h\to 0}\left({\frac {{\frac {1}{f(x+h)}}-{\frac {1}{f(x)}}}{h}}\right)\\&=\lim _{h\to 0}\left({\frac {f(x)-f(x+h)}{h\cdot f(x)\cdot f(x+h)}}\right)\\&=\lim _{h\to 0}\left(-\left({\frac {f(x+h)-f(x)}{h}}\right)\cdot \left({\frac {1}{f(x)\cdot f(x+h)}}\right)\right)\end{aligned}}} The limit of this product exists and is equal to the product of the existing limits of its factors: ( − lim h → 0 f ( x + h ) − f ( x ) h ) ⋅ ( lim h → 0 1 f ( x ) ⋅ f ( x + h ) ) {\displaystyle \left(-\lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}\right)\cdot \left(\lim _{h\to 0}{\frac {1}{f(x)\cdot f(x+h)}}\right)} Because of the differentiability of f {\displaystyle f} at x {\displaystyle x} the first limit equals − f ′ ( x ) , {\displaystyle -f'(x),} and because of f ( x ) ≠ 0 {\displaystyle f(x)\neq 0} and the continuity of f {\displaystyle f} at x {\displaystyle x} the second limit equals 1 / f ( x ) 2 , {\displaystyle 1/f(x)^{2},} thus yielding g ′ ( x ) = − f ′ ( x ) ⋅ 1 f ( x ) 2 = − f ′ ( x ) f ( x ) 2 {\displaystyle g'(x)=-f'(x)\cdot {\frac {1}{f(x)^{2}}}=-{\frac {f'(x)}{f(x)^{2}}}}
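A quick numerical check of the rule, using a central finite difference; the test function and evaluation point are illustrative choices.

```python
import math

# Quick numerical check of the reciprocal rule g'(x) = -f'(x) / f(x)^2,
# using a central finite difference. The test function is illustrative.

f = math.cos                          # f(x) = cos(x)
f_prime = lambda x: -math.sin(x)      # f'(x) = -sin(x)

def g(x):
    return 1.0 / f(x)

x, h = 0.7, 1e-6
numeric = (g(x + h) - g(x - h)) / (2 * h)   # finite-difference derivative of 1/f
analytic = -f_prime(x) / f(x) ** 2          # reciprocal rule

print(numeric, analytic)                    # agree to roughly 1e-9
# Here g(x) = sec(x), so the result also equals sec(x)*tan(x), as derived later.
print(1 / math.cos(x) * math.tan(x))
```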
It may be argued that since
f ( x ) ⋅ 1 f ( x ) = 1 , {\displaystyle f(x)\cdot {\frac {1}{f(x)}}=1,}
an application of the product rule says that
f ′ ( x ) ( 1 f ) ( x ) + f ( x ) ( 1 f ) ′ ( x ) = 0 , {\displaystyle f'(x)\left({\frac {1}{f}}\right)(x)+f(x)\left({\frac {1}{f}}\right)'(x)=0,}
and this may be algebraically rearranged to say
( 1 f ) ′ ( x ) = − f ′ ( x ) f ( x ) 2 . {\displaystyle \left({\frac {1}{f}}\right)'(x)={\frac {-f'(x)}{f(x)^{2}}}.}
However, this fails to prove that 1/ f is differentiable at x ; it is valid only when differentiability of 1/ f at x is already established. In that way, it is a weaker result than the reciprocal rule proved above. However, in the context of differential algebra , in which there is nothing that is not differentiable and in which derivatives are not defined by limits, it is in this way that the reciprocal rule and the more general quotient rule are established.
Often the power rule, stating that d d x ( x n ) = n x n − 1 {\displaystyle {\tfrac {d}{dx}}(x^{n})=nx^{n-1}} , is proved by methods that are valid only when n is a nonnegative integer. This can be extended to negative integers n by letting n = − m {\displaystyle n=-m} , where m is a positive integer.
d d x x n = d d x ( 1 x m ) = − d d x x m ( x m ) 2 , by the reciprocal rule = − m x m − 1 x 2 m , by the power rule applied to the positive integer m , = − m x − m − 1 = n x n − 1 , by substituting back n = − m . {\displaystyle {\begin{aligned}{\frac {d}{dx}}x^{n}&={\frac {d}{dx}}\,\left({\frac {1}{x^{m}}}\right)\\&=-{\frac {{\frac {d}{dx}}x^{m}}{(x^{m})^{2}}},{\text{ by the reciprocal rule}}\\&=-{\frac {mx^{m-1}}{x^{2m}}},{\text{ by the power rule applied to the positive integer }}m,\\&=-mx^{-m-1}=nx^{n-1},{\text{ by substituting back }}n=-m.\end{aligned}}}
The reciprocal rule is a special case of the quotient rule, which states that if f and g are differentiable at x and g ( x ) ≠ 0 then
d d x [ f ( x ) g ( x ) ] = g ( x ) f ′ ( x ) − f ( x ) g ′ ( x ) [ g ( x ) ] 2 . {\displaystyle {\frac {d}{dx}}\,\left[{\frac {f(x)}{g(x)}}\right]={\frac {g(x)f\,'(x)-f(x)g'(x)}{[g(x)]^{2}}}.}
The quotient rule can be proved by writing
f ( x ) g ( x ) = f ( x ) ⋅ 1 g ( x ) {\displaystyle {\frac {f(x)}{g(x)}}=f(x)\cdot {\frac {1}{g(x)}}}
and then first applying the product rule, and then applying the reciprocal rule to the second factor.
d d x [ f ( x ) g ( x ) ] = d d x [ f ( x ) ⋅ 1 g ( x ) ] = f ′ ( x ) ⋅ 1 g ( x ) + f ( x ) ⋅ d d x [ 1 g ( x ) ] = f ′ ( x ) ⋅ 1 g ( x ) + f ( x ) ⋅ [ − g ′ ( x ) g ( x ) 2 ] = f ′ ( x ) g ( x ) − f ( x ) g ′ ( x ) [ g ( x ) ] 2 = f ′ ( x ) g ( x ) − f ( x ) g ′ ( x ) [ g ( x ) ] 2 . {\displaystyle {\begin{aligned}{\frac {d}{dx}}\left[{\frac {f(x)}{g(x)}}\right]&={\frac {d}{dx}}\left[f(x)\cdot {\frac {1}{g(x)}}\right]\\&=f'(x)\cdot {\frac {1}{g(x)}}+f(x)\cdot {\frac {d}{dx}}\left[{\frac {1}{g(x)}}\right]\\&=f'(x)\cdot {\frac {1}{g(x)}}+f(x)\cdot \left[{\frac {-g'(x)}{g(x)^{2}}}\right]\\&={\frac {f'(x)}{g(x)}}-{\frac {f(x)g'(x)}{[g(x)]^{2}}}\\&={\frac {f'(x)g(x)-f(x)g'(x)}{[g(x)]^{2}}}.\end{aligned}}}
By using the reciprocal rule one can find the derivative of the secant and cosecant functions.
For the secant function:
d d x sec x = d d x ( 1 cos x ) = − d d x cos x cos 2 x = sin x cos 2 x = 1 cos x ⋅ sin x cos x = sec x tan x . {\displaystyle {\begin{aligned}{\frac {d}{dx}}\sec x&={\frac {d}{dx}}\,\left({\frac {1}{\cos x}}\right)={\frac {-{\frac {d}{dx}}\cos x}{\cos ^{2}x}}={\frac {\sin x}{\cos ^{2}x}}={\frac {1}{\cos x}}\cdot {\frac {\sin x}{\cos x}}=\sec x\tan x.\end{aligned}}}
The cosecant is treated similarly:
d d x csc x = d d x ( 1 sin x ) = − d d x sin x sin 2 x = − cos x sin 2 x = − 1 sin x ⋅ cos x sin x = − csc x cot x . {\displaystyle {\begin{aligned}{\frac {d}{dx}}\csc x&={\frac {d}{dx}}\,\left({\frac {1}{\sin x}}\right)={\frac {-{\frac {d}{dx}}\sin x}{\sin ^{2}x}}=-{\frac {\cos x}{\sin ^{2}x}}=-{\frac {1}{\sin x}}\cdot {\frac {\cos x}{\sin x}}=-\csc x\cot x.\end{aligned}}} | https://en.wikipedia.org/wiki/Reciprocal_rule |
Reciprocal silencing , a genetic phenomenon that primarily occurs in plants, refers to the pattern of redundant genes being silenced following a polyploid event. Polyploidy (wholesale genome duplication) is common in plants and constitutes an important method of speciation. [ 1 ] When a polyploid species arises, its genome contains homoeologs , duplicated chromosomes with equivalent genetic information. However, silencing of redundant genes occurs rapidly in new polyploids through genetic and epigenetic means. This primarily occurs because redundancy allows one of the two genes present for each locus to be silenced without affecting the phenotype of the organism, and thus mutations that eliminate gene expression are much less likely to be deleterious or lethal. [ 1 ] [ 2 ] This allows mutations that would be lethal in diploid populations to accumulate in polyploids. Reciprocal silencing refers to the specific pattern of silencing where equivalent loci are silenced and expressed in a reciprocal manner. This phenomenon is observed on two distinct scales.
Allopolyploids are species whose increased complement of genetic material is the result of hybridization of two closely related species . Thus homeologous chromosomes in allopolyploids are equivalent, but not identical. These differences mean that the precise pattern of silencing and expression can have important phenotypic effects. Reciprocal silencing on the population level refers to the case where two populations are each descended from the same allopolyploid . In one population, one of the two equivalent loci (A) is expressed while the other (B) has been silenced, while in the other population the reciprocal pattern occurs, with B being expressed and A silenced. It is important to note that this refers to equivalent loci, specific locations within the genome, rather than the entire homeologous chromosome .
Reciprocal silencing on the population level has been proposed as a means of allopatric speciation following a polyploid event. [ 1 ] Allopatric speciation occurs when two populations of the same species become spatially separated and accumulate enough genetic differences to lose the ability to interbreed. As redundant genes are silenced in allopolyploids there is the potential for rapid genetic differences to accumulate through reciprocal silencing. These differences can lead to the loss of ability to interbreed between separated populations at a faster rate than other methods of speciation , given the relative speed with which genes are silenced following a polyploid event. Faster still, redundant genes can be silenced through epigenetic means, although the importance of this phenomenon is not fully understood. [ 2 ]
Reciprocal silencing on the tissue level refers to the same pattern of silencing and expression of homeologous loci. However, in this case, the differences in silencing and expression occur between two types of tissue within the same individual, rather than in individuals of different populations. This is an example of neofunctionalization , a process where duplicated genes that were once at equivalent loci evolve to carry out two separate functions. [ 3 ] Since different tissues require different genes to be expressed, reciprocal silencing can occur between tissues . Importantly, while the pattern of gene expression is the same as in the population case, the genetic means by which this pattern is achieved are very different. While silencing mutations are thought to be the main source of reciprocal silencing at the population level, at the tissue level only epigenetic factors are in play, since expressible copies of both homeologous loci must exist in all cells in an individual if different tissues express different homeologs. | https://en.wikipedia.org/wiki/Reciprocal_silencing |
The reciprocating chemical muscle (RCM) is a mechanism that takes advantage of the superior energy density of chemical reactions . It is a regenerative device that converts chemical energy into motion through a direct noncombustive chemical reaction.
RCM is capable of generating autonomic wing beating from a chemical energy source. It can also be used to provide a small amount of electricity to the onboard control systems. It further helps in differential lift enhancement on the wings to achieve roll, pitch, and hence, steered flight. The RCM technique is particularly useful in the manufacturing of insect-like micro air vehicles . The first generation of RCMs was large and had a reciprocating frequency around 10 Hz. The later generations [ 1 ] were much smaller and lighter, and their reciprocating frequency was as high as 60 Hz. The reciprocating chemical muscle was invented by Prof. Robert C. Michelson of the Georgia Tech Research Institute and implemented up through its fourth generation by Nino Amarena of ETS Laboratories.
Particular benefits of the RCM are:
The reciprocating chemical muscle uses various monopropellants in the presence of specific catalysts to create gas from a liquid without combustion . [ 3 ] This gas is used to drive reciprocating opposing cylinders (in the fourth-generation device) to produce sufficient motion (throw) with sufficient force and frequency to allow flapping-wing flight. As of 2004, the RCM had been demonstrated in the Georgia Tech Research Institute laboratory to achieve sufficient throw, force, and frequency for operation of a 50-gram entomopter while using high concentration (> 90%) hydrogen peroxide in the presence of a proprietary catalyst developed by ETS Laboratories. [ 4 ]
The reciprocating chemical muscle was developed as a drive mechanism for the flapping wings of the entomopter. The RCM reuses energy many times before releasing it into its surroundings. [ 5 ] First, it converts mainly heat energy into flapping-wing motion in the entomopter. Then, heat is scavenged for thermoelectric generation in support of ancillary systems. Waste gas from the chemical decomposition of the fuel is then used to create a frequency modulated continuous wave acoustic ranging signal that is Doppler insensitive (used for obstacle avoidance). Waste gas is then passed through an ejector to entrain external atmospheric gases to increase mass flow and decrease waste gas temperature so that lower-temperature components can be used downstream. Some waste gas is diverted into gas bearings for rotational and linear moving components. Finally, remaining waste gas is vectored into the wings where it is used for circulation-controlled lift augmentation ( Coanda effect ). Any remaining gas can be used for vectored thrust , but if the gas budgets are correctly designed, there should be no extra gas beyond the circulation control points. The features of the RCM are tailored to the entomopter to conserve energy. [ 2 ] | https://en.wikipedia.org/wiki/Reciprocating_Chemical_Muscle |
Reciprocating motion , also called reciprocation , is a repetitive up-and-down or back-and-forth linear motion . It is found in a wide range of mechanisms, including reciprocating engines and pumps . The two opposite motions that comprise a single reciprocation cycle are called strokes . [ citation needed ]
A crank can be used to convert circular motion into reciprocating motion, or conversely to turn reciprocating motion into circular motion. [ citation needed ] [ 1 ]
For example, inside an internal combustion engine (a type of reciprocating engine), the expansion of burning fuel in the cylinders periodically pushes the piston down, which, through the connecting rod , turns the crankshaft . The continuing rotation of the crankshaft drives the piston back up, ready for the next cycle. The piston moves in a reciprocating motion, which is converted into the circular motion of the crankshaft, which ultimately propels the vehicle or does other useful work. [ citation needed ]
The reciprocating motion of a pump piston is close to, but different from, sinusoidal simple harmonic motion . Assuming the wheel is driven at a perfectly constant rotational velocity, the point on the crankshaft which connects to the connecting rod rotates smoothly at a constant velocity in a circle. Thus, the displacement of that point is indeed exactly sinusoidal by definition. However, during the cycle, the angle of the connecting rod changes continuously, so the horizontal displacement of the "far" end of the connecting rod (i.e., the end connected to the piston) differs slightly from sinusoidal. Additionally, if the wheel is not spinning with a perfectly constant rotational velocity, such as in a steam locomotive starting up from a stop, the motion will be even less sinusoidal. [ citation needed ]
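A short sketch of that deviation for a crank-slider mechanism. The exact-position formula, crank radius and rod length below are standard slider-crank kinematics used as an illustrative assumption, not values stated in the text.

```python
import math

# Sketch: exact piston displacement of a crank-slider mechanism versus a pure
# sinusoid, illustrating the deviation discussed above.
# Crank radius r and connecting-rod length l are illustrative values.

r, l = 1.0, 3.0

def piston_position(theta):
    # Distance from the crank centre to the piston pin along the cylinder axis.
    return r * math.cos(theta) + math.sqrt(l**2 - (r * math.sin(theta))**2)

def sinusoid(theta):
    # A sinusoid that matches the piston at top and bottom dead centre.
    return l + r * math.cos(theta)

for deg in range(0, 181, 45):
    theta = math.radians(deg)
    exact = piston_position(theta)
    approx = sinusoid(theta)
    print(f"{deg:3d} deg  exact = {exact:.4f}  sinusoid = {approx:.4f}  diff = {exact - approx:+.4f}")
```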
| https://en.wikipedia.org/wiki/Reciprocating_motion |
Reciprocity in linear systems is the principle that a response Rab , measured at a location (and direction if applicable) a , when the system has an excitation signal applied at a location (and direction if applicable) b , is exactly equal to Rba which is the response at location b , when that same excitation is applied at a . This applies for all frequencies of the excitation signal. If Hab is the transfer function between a and b then Hab = Hba , if the system is linear.
In the special case of a modal analysis this is known as Maxwell's reciprocity theorem. [ 1 ] In electromagnetism the concept is known as Lorentz reciprocity , a special case of which is the reciprocity theorem of electrical networks.
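A minimal sketch of this statement for a linear static structure with a symmetric stiffness matrix, a static analogue of the Maxwell reciprocity mentioned above. The 3-degree-of-freedom stiffness matrix below is illustrative.

```python
import numpy as np

# Sketch of reciprocity for a linear static structure: with a symmetric
# stiffness matrix K, the displacement at location a due to a unit load at b
# equals the displacement at b due to a unit load at a.

K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])          # symmetric, positive definite

flexibility = np.linalg.inv(K)               # maps loads to displacements

a, b = 0, 2
load_at_b = np.zeros(3); load_at_b[b] = 1.0
load_at_a = np.zeros(3); load_at_a[a] = 1.0

R_ab = (flexibility @ load_at_b)[a]          # response at a, excitation at b
R_ba = (flexibility @ load_at_a)[b]          # response at b, excitation at a
print(R_ab, R_ba)                            # equal, since inv(K) is symmetric
```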
The reciprocity principle is also used in the analysis of structures. [ 2 ] When combined with superposition , symmetry and anti-symmetry, it can be used to resolve complex load conditions. | https://en.wikipedia.org/wiki/Reciprocity_(engineering) |
Reciprocity in evolutionary biology refers to mechanisms whereby the evolution of cooperative or altruistic behaviour may be favoured by the probability of future mutual interactions. A corollary is how a desire for revenge can harm the collective and therefore be naturally selected against.
Three types of reciprocity have been studied extensively:
Direct reciprocity was proposed by Robert Trivers as a mechanism for the evolution of cooperation. [ 1 ] If there are repeated encounters between the same two players in an evolutionary game in which each of them can choose either to "cooperate" or "defect", then a strategy of mutual cooperation may be favoured even if it pays each player, in the short term, to defect when the other cooperates. Direct reciprocity can lead to the
evolution of cooperation only if the probability, w, of another encounter between the same two individuals exceeds the cost-to-benefit ratio of the altruistic act: [ 2 ] w > c / b
"In the standard framework of indirect reciprocity, there are randomly chosen pairwise encounters between members of a population; the same two individuals need not meet again. One individual acts as donor, the other as recipient. The donor can decide whether or not to cooperate. The interaction is observed by a subset of the population who might inform others. Reputation allows evolution of cooperation by indirect reciprocity. Natural selection favors strategies that base the decision to help on the reputation of the recipient: studies show that people who are more helpful are more likely to receive help." [ 3 ] In many situations cooperation is favoured and it even benefits an individual to forgive an occasional defection but cooperative societies are always unstable because mutants inclined to defect can upset any balance. [ 4 ]
The calculations of indirect reciprocity are complicated, but again a simple rule has emerged. [ 5 ] Indirect reciprocity can only promote cooperation if the probability, q, of knowing someone’s reputation exceeds the cost-to-benefit ratio of the altruistic act: q > c / b
One important problem with this explanation is that individuals may be able to evolve the capacity to obscure their reputation, reducing the probability, q, that it will be known. [ 6 ]
Individual acts of indirect reciprocity may be classified as "upstream" or "downstream": [ 7 ]
Real populations are not well mixed, but have spatial structures or social networks which imply that some individuals interact more often than others. One approach to capturing this effect is evolutionary graph theory, [ 8 ] in which individuals occupy the vertices of a graph. The edges determine who interacts with whom. If a cooperator pays a cost, c, for each neighbor to receive a benefit, b, while defectors have no costs and their neighbors receive no benefits, network reciprocity can favor cooperation. [ 9 ] The benefit-to-cost ratio must exceed the average number of neighbors, k, per individual: b / c > k
Recent work [ 10 ] shows that the benefit-to-cost ratio must exceed the mean degree of nearest neighbors, ⟨k_nn⟩: b / c > ⟨k_nn⟩
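The three rules above (direct, indirect and network reciprocity) all reduce to simple threshold comparisons. The sketch below (Python, with made-up costs, benefits and probabilities) encodes them for illustration only.

```python
def direct_reciprocity_favoured(w, c, b):
    """Cooperation can evolve if the probability w of a repeat encounter exceeds c/b."""
    return w > c / b

def indirect_reciprocity_favoured(q, c, b):
    """Cooperation can evolve if the probability q of knowing a reputation exceeds c/b."""
    return q > c / b

def network_reciprocity_favoured(b, c, mean_degree):
    """Cooperation can evolve if the benefit-to-cost ratio exceeds the mean number of neighbours."""
    return b / c > mean_degree

# Illustrative numbers only: cost c = 1, benefit b = 5 (so c/b = 0.2).
print(direct_reciprocity_favoured(w=0.3, c=1, b=5))           # True:  0.3 > 0.2
print(indirect_reciprocity_favoured(q=0.1, c=1, b=5))         # False: 0.1 < 0.2
print(network_reciprocity_favoured(b=5, c=1, mean_degree=4))  # True:  5 > 4
```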
An ethical concept known as "generalized reciprocity" holds that people should show kindness to others without anticipating prompt return favors. [ 11 ] This kind of reciprocity emphasizes the intrinsic value of humanitarian acts and goes beyond transactional expectations. In the field of social dynamics, generalized reciprocity encourages a culture of giving and unity. When people engage in this type of reciprocity, they give without thinking about what they could get back, showing that they care about the general welfare of the community. [ 12 ] It portrays a kind of social connection in which individuals give, share, or assist without anticipating anything in return.
This selfless involvement spreads outside of close circles, creating a domino effect that improves the well-being of everybody. Therefore, generalized reciprocity is evidence of the persistent value of selfless contributions in building strong, cohesive communities. Adopting this idea means being committed to the timeless values of giving and having faith in the natural flow of advantages for both parties. [ 13 ] | https://en.wikipedia.org/wiki/Reciprocity_(evolution) |
Recirculating aquaculture systems ( RAS ) are used in home aquaria and for fish production where water exchange is limited and the use of biofiltration is required to reduce ammonia toxicity. [ 1 ] Other types of filtration and environmental control are often also necessary to maintain clean water and provide a suitable habitat for fish. [ 2 ] The main benefit of RAS is the ability to reduce the need for fresh, clean water while still maintaining a healthy environment for fish. To be operated economically, commercial RAS must have high fish stocking densities, and many researchers are currently conducting studies to determine if RAS is a viable form of intensive aquaculture . [ 3 ]
A series of treatment processes is utilized to maintain water quality in intensive fish farming operations. These steps are often done in order or sometimes in tandem. After leaving the vessel holding the fish, the water is first treated for solids before entering a biofilter to convert ammonia; next, degassing and oxygenation occur, often followed by heating/cooling and sterilization. Each of these processes can be completed using a variety of different methods and equipment, but regardless all must take place to ensure a healthy environment that maximizes fish growth and health. [ citation needed ]
All RAS rely on biofiltration to convert ammonia (NH4+ and NH3) excreted by the fish into nitrate . [ 4 ] Ammonia is a waste product of fish metabolism, and high concentrations (>0.02 mg/L) are toxic to most finfish. [ 5 ] Nitrifying bacteria are chemoautotrophs that convert ammonia into nitrite (NO2−) and then nitrate (NO3−). These include bacteria of the genera Nitrobacter , Nitrococcus , Nitrospira , and Nitrospina . Although nitrite is usually converted to nitrate as quickly as it is produced, a lack of biological oxidation of the nitrite will result in elevated nitrite levels that can be toxic to the fish. High levels of nitrite are also indicative of impending biofilter failure. Nitrate is the end-product of nitrification and is the least toxic of the nitrogen compounds, with 96-hour exposure LC50 values in freshwater in excess of 1,000 mg/L. [ 6 ]
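Only the un-ionized fraction of total ammonia is acutely toxic, and that fraction rises sharply with pH and temperature. The sketch below (Python) estimates it with the widely cited Emerson et al. (1975) relationship for fresh water; the formula and the example concentrations are assumptions for illustration, not values taken from this article.

```python
def unionized_ammonia_fraction(ph, temp_c):
    """Fraction of total ammonia nitrogen present as un-ionized NH3 in fresh water,
    using the empirical Emerson et al. (1975) relationship."""
    pka = 0.09018 + 2729.92 / (temp_c + 273.15)
    return 1.0 / (1.0 + 10 ** (pka - ph))

# Example: 1.0 mg/L total ammonia nitrogen (TAN) at pH 7.5 and 25 degC.
tan_mg_l = 1.0
nh3_mg_l = tan_mg_l * unionized_ammonia_fraction(ph=7.5, temp_c=25.0)
print(f"un-ionized NH3 ~ {nh3_mg_l:.3f} mg/L")  # ~0.018 mg/L, close to the 0.02 mg/L threshold above
```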
A biofilter provides a substrate for the bacterial community, which results in thick biofilm growing within the filter. [ 4 ] Water is pumped through the filter, and ammonia is utilized by the bacteria for energy. In recirculating systems, daily water exchanges are commonly used to control nitrogen levels. Stable environmental conditions and regular maintenance are required to ensure the biofilter is operating efficiently. [ citation needed ]
In addition to treating the liquid waste excreted by fish, the solid waste must also be treated; this is done by concentrating and flushing the solids out of the system. [ 7 ] Removing solids reduces bacteria growth, oxygen demand, and the proliferation of disease. The simplest method for removing solids is the creation of a settling basin, where the relative velocity of the water is slow and particles can settle at the bottom of the tank, where they are either flushed out or vacuumed out manually using a siphon. However, this method is not viable for RAS operations where a small footprint is desired. Typical RAS solids removal involves a sand filter or particle filter where solids become lodged and can be periodically backflushed out of the filter. [ 8 ] Another common method is the use of a mechanical drum filter, where water is run over a rotating drum screen that is periodically cleaned by pressurized spray nozzles, and the resulting slurry is treated or sent down the drain. In order to remove extremely fine particles or colloidal solids, a protein fractionator may be used with or without the addition of ozone (O3). [ citation needed ]
Reoxygenating the system water is a crucial part of obtaining high production densities. Fish require oxygen to metabolize food and grow, as do the bacteria communities in the biofilter. Dissolved oxygen levels can be increased through two methods, aeration and oxygenation . In aeration, air is pumped through an air stone or similar device that creates small bubbles in the water column; this results in a high surface area across which oxygen can dissolve into the water. In general, due to slow gas dissolution rates and the high air pressure needed to create small bubbles, this method is considered inefficient, and the water is instead oxygenated by pumping in pure oxygen. [ 9 ] Various methods are used to ensure that during oxygenation all of the oxygen dissolves into the water column. Careful calculation and consideration must be given to the oxygen demand of a given system, and that demand must be met with either oxygenation or aeration equipment. [ 10 ]
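As a rough illustration of how that oxygen demand might be budgeted, the sketch below ties it to the daily feed input. The 0.5 kg O2 per kg feed figure and the transfer efficiency are assumptions for the example, not values given in the article; real designs use measured demand.

```python
# Rough oxygen budget for an RAS, driven by feed load (all figures assumed).
feed_kg_per_day = 100.0     # daily feed input (hypothetical system)
o2_per_kg_feed = 0.5        # assumed total demand (fish + biofilter), kg O2 per kg feed
transfer_efficiency = 0.90  # assumed fraction of injected oxygen that actually dissolves

o2_demand = feed_kg_per_day * o2_per_kg_feed  # kg O2 the system consumes per day
o2_supply = o2_demand / transfer_efficiency   # kg O2 that must be injected per day
print(f"demand ~ {o2_demand:.0f} kg O2/day, supply required ~ {o2_supply:.0f} kg O2/day")
```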
In all RAS, pH must be carefully monitored and controlled. The first step of nitrification in the biofilter consumes alkalinity and lowers the pH of the system. [ 11 ] Keeping the pH in a suitable range (5.0-9.0 for freshwater systems) is crucial to maintain the health of both the fish and the biofilter. pH is typically controlled by the addition of alkalinity in the form of lime (CaCO3) or sodium hydroxide (NaOH). A low pH will lead to high levels of dissolved carbon dioxide (CO2), which can prove toxic to fish. [ 12 ] pH can also be controlled by degassing CO2 in a packed column or with an aerator; this is necessary in intensive systems, especially where oxygenation instead of aeration is used in tanks to maintain O2 levels. [ 13 ]
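The alkalinity consumed by nitrification can be estimated from standard stoichiometry: roughly 7.14 g of alkalinity (as CaCO3) is destroyed per gram of ammonia-nitrogen oxidized. The sketch below (Python; the daily ammonia load is a made-up figure) turns that into a daily dosing estimate.

```python
ALK_PER_G_TAN = 7.14  # g alkalinity (as CaCO3) consumed per g ammonia-N nitrified (standard stoichiometry)

def daily_alkalinity_demand(tan_nitrified_g_per_day):
    """Alkalinity (as CaCO3, g/day) that must be replaced to hold pH steady."""
    return ALK_PER_G_TAN * tan_nitrified_g_per_day

# Hypothetical example: the biofilter nitrifies 500 g of ammonia-N per day.
print(f"{daily_alkalinity_demand(500):.0f} g CaCO3-equivalent per day")  # ~3570 g
```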
All fish species have a preferred temperature above and below which that fish will experience negative health effects and eventually death. Warm water species such as tilapia and barramundi prefer 24 °C water or warmer, whereas cold water species such as trout and salmon prefer water temperatures below 16 °C. Temperature also plays an important role in dissolved oxygen (DO) concentrations, with higher water temperatures having lower values for DO saturation. Temperature is controlled through the use of submerged heaters, heat pumps , chillers , and heat exchangers . [ 14 ] All four may be used to keep a system operating at the optimal temperature for maximizing fish production.
Disease outbreaks occur more readily when dealing with the high fish stocking densities typically employed in intensive RAS. Outbreaks can be reduced by operating multiple independent systems within the same building and by preventing water-to-water contact between systems, for example by cleaning equipment and controlling the movement of personnel between systems. [ 15 ] The use of an ultraviolet (UV) or ozone water treatment system also reduces the number of free-floating viruses and bacteria in the system water. These treatment systems reduce the disease loading that occurs on stressed fish and thus reduce the chance of an outbreak. [ citation needed ]
A drawback of RAS is the high upfront investment required in materials and infrastructure. [ 21 ]
Combining plants and fish in a RAS is referred to as aquaponics. In this type of system, ammonia produced by the fish is not only converted to nitrate but is also removed from the water by the plants. [ 23 ] In an aquaponics system the fish effectively fertilize the plants; this creates a closed-loop system where very little waste is generated and inputs are minimized. Aquaponics provides the advantage of being able to harvest and sell multiple crops. Contradictory views exist on the suitability and safety of RAS effluents to sustain plant growth under aquaponic conditions. However, RAS wastewater and sludge contain sufficient and safe nutrients to sustain plant growth under aquaponic conditions, so future conversions, or rather 'upgrades', of operational RAS farms to semi-commercial aquaponic ventures should not be deterred by nutrient insufficiency or nutrient safety arguments. Making use of RAS farm wastes through semi-commercial aquaponics is encouraged. [ 24 ]
Home aquaria and inland commercial aquariums are a form of RAS where the water quality is very carefully controlled and the stocking density of fish is relatively low. In these systems the goal is to display the fish rather than producing food. However, biofilters and other forms of water treatment are still used to reduce the need to exchange water and to maintain water clarity. [ 25 ] Just like in traditional RAS water must be removed periodically to prevent nitrate and other toxic chemicals from building up in the system. Coastal aquariums often have high rates of water exchange and are typically not operated as a RAS due to their proximity to a large body of clean water. | https://en.wikipedia.org/wiki/Recirculating_aquaculture_system |
Reclaimed lumber is processed wood retrieved from its original application for purposes of subsequent use. Most reclaimed lumber comes from timbers and decking rescued from old barns , factories and warehouses, although some companies use wood from less traditional structures such as boxcars, coal mines and wine barrels . Reclaimed or antique lumber is used primarily for decoration and home building, for example for siding, architectural details, cabinetry, furniture and flooring.
In the United States of America , wood once functioned as the primary building material because it was strong, relatively inexpensive and abundant. Today, many of the woods that were once plentiful are only available in large quantities through reclamation. One common reclaimed wood, longleaf pine , was used to build factories and warehouses during the Industrial Revolution . The trees were slow-growing (taking 200 to 400 years to mature), tall, straight, and had a natural ability to resist mold and insects. [ 2 ] They were also abundant. Longleaf pine grew in thick forests that spanned over 140,000 square miles (360,000 km 2 ) of North America . [ 3 ] Reclaimed longleaf pine is often sold as Heart Pine, where the word "heart" refers to the heartwood of the tree. [ citation needed ]
Previously common woods for building barns and other structures were redwood (Sequoia sempervirens) on the U.S. west coast and American Chestnut on the U.S. east coast. Beginning in 1904, a chestnut blight spread across the US, killing billions of American Chestnuts, so when these structures were later dismantled, they were a welcome source of this desirable but later rare wood for subsequent reuse. American Chestnut wood can be identified as pre- or post-blight by analysis of worm tracks in sawn timber. The presence of worm tracks suggests the trees were felled as dead standing timber, and may be post-blight lumber.
Barns are one of the most common sources for reclaimed wood in the United States. Those constructed through the early 19th century were typically built using whatever trees were growing on or near the builder's property. They often contain a mix of oak, chestnut, poplar, hickory and pine timber. Beam sizes were limited to what could be moved by man and horse. The wood was often hand-hewn with an axe and/or adze . Early settlers likely recognized American oak from their experience with its European species. Red, white, black, scarlet, willow, post, and pin oak varieties have all been used in North American barns. [ citation needed ]
Mill buildings throughout the Northeast also provide an abundant source of reclaimed wood. Wood that is reclaimed from these buildings includes structural timbers - such as beams, posts, and joists - along with decking, flooring, and sheathing. These buildings often have no economic or reuse possibility, can be a fire hazard, and may require varying degrees of environmental cleanup. Reclaiming lumber and brick from these retired mills is considered a better use of materials than landfill-based disposal.
Another source of reclaimed wood is old snowfence . At the end of their tenure on the mountains and plains of the Rocky Mountain region, snowfence boards are a valued source of consistent, structurally sound and reliable reclaimed wood. [ citation needed ]
Other woods recycled and reprocessed into new wood products include coast redwood , hard maple , Douglas fir , walnut, hickory, red and white oak , and eastern white pine .
Reclaimed lumber is popular for many reasons: the wood's unique appearance, its contribution to green building , the history of the wood's origins, and the wood's physical characteristics such as strength, stability and durability. [ citation needed ] The increased strength of reclaimed wood is often attributed to the wood often having been harvested from virgin growth timber, which generally grew more slowly, producing a denser grain. [ citation needed ]
Reclaimed beams can often be sawn into wider planks than newly harvested lumber, and many companies claim their products are more stable than newly-cut wood because reclaimed wood has been exposed to changes in humidity for far longer. [ citation needed ]
The reclaimed lumber industry gained momentum in the early 1980s on the West Coast when large-scale reuse of softwoods began. The industry grew due to a growing concern for environmental impact as well as declining quality in new lumber. [ 4 ] On the East Coast, industry pioneers began selling reclaimed wood in the early 1970s but the industry stayed mostly small until the 1990s as waste disposal increased and deconstruction became a more economical alternative to demolition. A trade association, the Reclaimed Wood Council , was formed in May 2003 but dissolved in January 2008 due to a lack of participation among the larger reclaimed wood distributors. [ 5 ]
Reclaimed lumber is sold under a number of names, such as antique lumber, distressed lumber, recovered lumber, upcycled lumber, and others. It is often confused with salvage logging . [ citation needed ]
The Leadership in Energy and Environmental Design (LEED) Green Building Rating System is the US Green Building Council's (USGBC) benchmark for designing, building and operating green buildings. To be certified, projects must first meet the prerequisites designated by the USGBC and then earn a certain number of credits within six categories: sustainable sites, water efficiency , energy and atmosphere, materials and resources, indoor environmental quality, innovation and design process.
Using reclaimed wood can earn credits towards achieving LEED project certification. Because reclaimed wood is considered recycled content, it meets the 'materials and resources' criteria for LEED certification, and because some reclaimed lumber products are Forest Stewardship Council (FSC) certified, they can qualify for LEED credits under the 'certified wood' category. [ 6 ] | https://en.wikipedia.org/wiki/Reclaimed_lumber |
Water reclamation is the process of converting municipal wastewater or sewage and industrial wastewater into water that can be reused for a variety of purposes. It is also called wastewater reuse , water reuse or water recycling . There are many types of reuse. It is possible to reuse water in this way in cities or for irrigation in agriculture. Other types of reuse are environmental reuse, industrial reuse, and reuse for drinking water, whether planned or not. Reuse may include irrigation of gardens and agricultural fields or replenishing surface water and groundwater . The latter is also known as groundwater recharge . Reused water also serves various needs in residences (such as toilet flushing ), in businesses, and in industry. It is possible to treat wastewater to reach drinking water standards. Injecting reclaimed water into the water supply distribution system is known as direct potable reuse. Drinking reclaimed water is not typical. [ 1 ] Reusing treated municipal wastewater for irrigation is a long-established practice. This is especially so in arid countries. Reusing wastewater as part of sustainable water management allows water to remain an alternative water source for human activities. This can reduce scarcity . It also eases pressures on groundwater and other natural water bodies. [ 2 ]
There are several technologies used to treat wastewater for reuse. A combination of these technologies can meet strict treatment standards and make sure that the processed water is hygienically safe, meaning free from pathogens . The following are some of the typical technologies: Ozonation , ultrafiltration , aerobic treatment ( membrane bioreactor ), forward osmosis , reverse osmosis , and advanced oxidation , [ 3 ] or activated carbon . [ 4 ] Some water-demanding activities do not require high grade water. In this case, wastewater can be reused with little or no treatment.
The cost of reclaimed water exceeds that of potable water in many regions of the world, where fresh water is plentiful. The costs of water reclamation options might be compared to the costs of alternative options which also achieve similar effects of freshwater savings, namely greywater reuse systems, rainwater harvesting and stormwater recovery , or seawater desalination .
Water recycling and reuse is of increasing importance, not only in arid regions but also in cities and contaminated environments. [ 5 ] Municipal wastewater reuse is particularly high in the Middle East and North Africa region , in countries such as the UAE, Qatar, Kuwait and Israel. [ 6 ]
The term "water reuse" is generally used interchangeably with terms such as wastewater reuse, water reclamation, and water recycling. A definition by the USEPA states: "Water reuse is the method of recycling treated wastewater for beneficial purposes, such as agricultural and landscape irrigation, industrial processes, toilet flushing, and groundwater replenishing (EPA, 2004)." [ 7 ] [ 8 ] A similar description is: "Water Reuse, the use of reclaimed water from treated wastewater, has been a long-established reality in many (semi)arid countries and regions. It helps to alleviate water scarcity by supplementing limited freshwater resources." [ 9 ]
The water that is used as an input to the treatment and reuse processes can be from a variety of sources. Usually it is wastewater ( domestic or municipal, industrial or agricultural wastewater) but it could also come from urban runoff .
Reclaimed water is water that is used more than one time before it passes back into the natural water cycle. Advances in municipal wastewater treatment technology allow communities to reuse water for many different purposes. The water is treated differently depending upon the source and use of the water as well as how it gets delivered.
The World Health Organization has recognized the following principal driving forces for municipal wastewater reuse: [ 10 ] [ 11 ]
In some areas, one driving force is also the implementation of advanced wastewater treatment for the removal of organic micropollutants , which leads to an overall improved water quality. [ 4 ]
Already, the groundwater aquifers that are used by over half of the world population are being over-drafted. [ 12 ] Reuse will continue to increase as the world's population becomes increasingly urbanized and concentrated near coastlines, where local freshwater supplies are limited or are available only with large capital expenditure . [ 13 ] [ 14 ] Large quantities of freshwater can be saved by municipal wastewater reuse and recycling, reducing environmental pollution and improving carbon footprint . [ 5 ] Reuse can be an alternative water supply option.
Achieving more sustainable sanitation and wastewater management will require emphasis on actions linked to resource management, such as wastewater reuse or excreta reuse that will keep valuable resources available for productive uses. [ 2 ] This in turn supports human wellbeing and broader sustainability .
Water/wastewater reuse, as an alternative water source, can provide significant economic, social and environmental benefits, which are key motivators for implementing such reuse programs. These benefits include: [ 15 ] [ 14 ]
Reclaiming water for reuse applications instead of using freshwater supplies can be a water-saving measure. When used water is eventually discharged back into natural water sources, it can still have benefits to ecosystems , improving streamflow, nourishing plant life and recharging aquifers , as part of the natural water cycle . [ 19 ]
Global treated wastewater reuse is estimated at 40.7 billion m 3 per year, representing approximately 11% of the total domestic and manufacturing wastewater produced. [ 6 ] Municipal wastewater reuse is particularly high in the Middle East and North Africa region , in countries such as the UAE, Qatar, Kuwait and Israel. [ 6 ]
For the Sustainable Development Goal 6 by the United Nations, Target 6.3 states "Halving the proportion of untreated wastewater and substantially increasing recycling and safe reuse globally by 2030". [ 20 ]
Treated wastewater can be reused in industry (for example in cooling towers ), in artificial recharge of aquifers, in agriculture, and in the rehabilitation of natural ecosystems (for example in wetlands ). The main reclaimed water applications in the world are shown below: [ 21 ] [ 22 ] [ 15 ]
In rarer cases reclaimed water is also used to augment drinking water supplies. Most of the uses of water reclamation are non-potable uses such as washing cars, flushing toilets, cooling water for power plants, concrete mixing, artificial lakes, irrigation for golf courses and public parks, and for hydraulic fracturing . Where applicable, systems run a dual piping system to keep the recycled water separate from the potable water.
Usage types are distinguished as follows:
Irrigation with recycled municipal wastewater can also serve to fertilize plants if it contains nutrients, such as nitrogen, phosphorus and potassium. There are benefits of using recycled water for irrigation, including the lower cost compared to some other sources and consistency of supply regardless of season, climatic conditions and associated water restrictions. When reclaimed water is used for irrigation in agriculture, the nutrient (nitrogen and phosphorus) content of the treated wastewater has the benefit of acting as a fertilizer . [ 23 ] This can make the reuse of excreta contained in sewage attractive. [ 10 ]
The irrigation water can be used on different categories of crops: food crops intended for human consumption that are eaten raw or unprocessed; processed food crops, which are intended for human consumption only after food processing (i.e. cooked or industrially processed); [ 24 ] and crops which are not intended for human consumption (e.g. pastures, forage, fiber, ornamental, seed, forest and turf crops). [ 25 ]
In developing countries , agriculture is increasingly using untreated municipal wastewater for irrigation – often in an unsafe manner. Cities provide lucrative markets for fresh produce, so they are attractive to farmers. However, because agriculture has to compete for increasingly scarce water resources with industry and municipal users, there is often no alternative for farmers but to use water polluted with urban waste directly to water their crops.
There can be significant health hazards related to using untreated wastewater in agriculture. Municipal wastewater can contain a mixture of chemical and biological pollutants. In low-income countries, there are often high levels of pathogens from excreta. In emerging nations , where industrial development is outpacing environmental regulation, there are increasing risks from inorganic and organic chemicals. The World Health Organization developed guidelines for safe use of wastewater in 2006, [ 10 ] advocating a 'multiple-barrier' approach to wastewater use, for example by encouraging farmers to adopt various risk-reducing behaviors. These include ceasing irrigation a few days before harvesting to allow pathogens to die off in the sunlight; applying water carefully so it does not contaminate leaves likely to be eaten raw; cleaning vegetables with disinfectant; or allowing fecal sludge used in farming to dry before being used as manure. [ 23 ]
Drawbacks or risks often mentioned include the content of potentially harmful substances such as bacteria, heavy metals, or organic pollutants (including pharmaceuticals, personal care products and pesticides). Irrigation with wastewater can have both positive and negative effects on soil and plants, depending on the composition of the wastewater and on the soil or plant characteristics. [ 26 ]
The use of reclaimed water to create, enhance, sustain, or augment water bodies including wetlands , aquatic habitats, or stream flow is called "environmental reuse". [ 14 ] For example, constructed wetlands fed by wastewater provide both wastewater treatment and habitats for flora and fauna. [ citation needed ]
Treated wastewater can be reused in industry (for example in cooling towers ).
Planned potable reuse is publicly acknowledged as an intentional project to recycle water for drinking water. There are two ways in which potable water can be delivered for reuse – "Indirect Potable Reuse" (IPR) and "Direct Potable Reuse". Both these forms of reuse are described below, and commonly involve a more formal public process and public consultation program than is the case with de facto or unacknowledged reuse. [ 14 ] [ 27 ]
Some water agencies reuse highly treated effluent from municipal wastewater or resource recovery plants as a reliable, drought-proof source of drinking water. By using advanced purification processes, they produce water that meets all applicable drinking water standards. System reliability and frequent monitoring and testing are imperative to their meeting stringent controls. [ 3 ]
The water needs of a community, water sources, public health regulations, costs, and the types of water infrastructure in place— such as distribution systems, man-made reservoirs, or natural groundwater basins— determine if and how reclaimed water can be part of the drinking water supply. Some communities reuse water to replenish groundwater basins. Others put it into surface water reservoirs. In these instances the reclaimed water is blended with other water supplies and/or sits in storage for a certain amount of time before it is drawn out and gets treated again at a water treatment or distribution system. In some communities, the reused water is put directly into pipelines that go to a water treatment plant or distribution system. [ citation needed ]
Modern technologies such as reverse osmosis and ultraviolet disinfection are commonly used when reclaimed water will be mixed with the drinking water supply. [ 3 ]
Many people associate a feeling of disgust with reclaimed water and 13% of a survey group said they would not even sip it. [ 28 ] Nonetheless, the main health risk for potable use of reclaimed water is the potential for pharmaceutical and other household chemicals or their derivatives ( environmental persistent pharmaceutical pollutants ) to persist in this water. [ 29 ] This would be less of a concern if human excreta was kept out of sewage by using dry toilets or, alternatively, systems that treat blackwater separately from greywater .
Indirect potable reuse (IPR) means the water is delivered to the consumer indirectly. After it is purified, the reused water blends with other supplies and/or sits a while in some sort of storage, man-made or natural, before it gets delivered to a pipeline that leads to a water treatment plant or distribution system. That storage could be a groundwater basin or a surface water reservoir.
Some municipalities are using and others are investigating IPR of reclaimed water. For example, reclaimed water may be pumped into (subsurface recharge) or percolated down to (surface recharge) groundwater aquifers, pumped out, treated again, and finally used as drinking water. This technique may also be referred to as groundwater recharging . It allows further slow purification steps as the water passes through layers of earth and sand (absorption) and through the microflora in the soil (biodegradation).
IPR or even unplanned potable use of reclaimed wastewater is used in many countries, where the latter is discharged into groundwater to hold back saline intrusion in coastal aquifers. IPR has generally included some type of environmental buffer, but conditions in certain areas have created an urgent need for more direct alternatives. [ 30 ]
IPR occurs through the augmentation of drinking water supplies with municipal wastewater treated to a level suitable for IPR followed by an environmental buffer (e.g. rivers, dams, aquifers, etc.) that precedes drinking water treatment. In this case, municipal wastewater passes through a series of treatment steps that encompasses membrane filtration and separation processes (e.g. MF, UF and RO), followed by an advanced chemical oxidation process (e.g. UV, UV+H 2 O 2 , ozone). [ 14 ] In ‘indirect' potable reuse applications, the reclaimed wastewater is used directly or mixed with other sources. [ citation needed ]
Direct potable reuse (DPR) means the reused water is put directly into pipelines that go to a water treatment plant or distribution system. Direct potable reuse may occur with or without "engineered storage" such as underground or above ground tanks. [ 14 ] In other words, DPR is the introduction of reclaimed water derived from domestic wastewater after extensive treatment and monitoring to assure that strict water quality requirements are met at all times, directly into a municipal water supply system.
Wastewater reclamation can be especially important in relation to human spaceflight . In 1998, NASA announced it had built a human waste reclamation bioreactor designed for use in the International Space Station and a crewed Mars mission. Human urine and feces are input into one end of the reactor and pure oxygen , pure water , and compost ( humanure ) are output from the other end. The soil could be used for growing vegetables , and the bioreactor also produces electricity . [ 31 ] [ 32 ]
Aboard the International Space Station, astronauts have been able to drink recycled urine due to the introduction of the ECLSS system. The system costs $250 million and has been working since May 2009. The system recycles wastewater and urine back into potable water used for drinking, food preparation, and oxygen generation. This cuts back on the need to frequently resupply the space station. [ 33 ]
De facto, unacknowledged or unplanned potable reuse refers to situations where reuse of treated wastewater is practiced but is not officially recognized. [ 14 ] For example, a sewage treatment plant from one city may be discharging effluents to a river which is used as a drinking water supply for another city downstream. [ citation needed ]
Unplanned Indirect Potable Use [ 34 ] has existed for a long time. Large towns on the River Thames upstream of London ( Oxford , Reading , Swindon , Bracknell ) discharge their treated sewage ("non-potable water") into the Thames, which supplies water to London downstream. In the United States, the Mississippi River serves as both the destination of sewage treatment plant effluent and the source of potable water. [ citation needed ]
Non-potable reclaimed water is often distributed with a dual piping network that keeps reclaimed water pipes completely separate from potable water pipes.
There are several technologies used to treat wastewater for reuse. A combination of these technologies can meet strict treatment standards and make sure that the processed water is hygienically safe, meaning free from pathogens . Some common technologies include ozonation , ultrafiltration , aerobic treatment ( membrane bioreactor ), forward osmosis , reverse osmosis , advanced oxidation [ 3 ] or activated carbon . [ 4 ] Reclaimed water providers use multi-barrier treatment processes and constant monitoring to ensure that reclaimed water is safe and treated properly for the intended end use.
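One common way such multi-barrier trains are assessed is by summing the log-reduction credit assigned to each barrier for a given pathogen group. The sketch below (Python) shows the bookkeeping only; the individual credit values are placeholders for illustration, not regulatory figures.

```python
# Illustrative log-reduction accounting for a multi-barrier treatment train.
barriers = {
    "ultrafiltration": 4.0,          # placeholder credit values (log10)
    "reverse osmosis": 2.0,
    "UV / advanced oxidation": 4.0,
}

total_log_reduction = sum(barriers.values())
reduction_factor = 10 ** total_log_reduction
print(f"total credit: {total_log_reduction} log10 "
      f"(pathogen concentration reduced by a factor of {reduction_factor:.0e})")
```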
Some water-demanding activities do not require high grade water. In this case, wastewater can be reused with little or no treatment. One example of this scenario is in the domestic environment where toilets can be flushed using greywater from baths and showers with little or no treatment.
In the case of municipal wastewater , the wastewater must pass through numerous sewage treatment process steps before it can be used. Steps might include screening, primary settling, biological treatment, tertiary treatment (for example reverse osmosis), and disinfection.
Wastewater is generally treated only to the secondary treatment level when used for irrigation.
A pump station distributes reclaimed water to users around a city. These may include golf courses, agricultural uses, cooling towers, or landfills.
Rather than treating municipal wastewater for reuse purposes, other options can achieve similar effects of freshwater savings:
The cost of reclaimed water exceeds that of potable water in many regions of the world, where fresh water is plentiful. However, reclaimed water is usually sold to citizens at a cheaper rate to encourage its use. As fresh water supplies become limited due to distribution costs, increased population demands, or climate change , the cost ratios will also evolve. The evaluation of reclaimed water needs to consider the entire water supply system, as it may bring important flexibility into the overall system. [ 35 ]
Reclaimed water systems usually require a dual piping network, often with additional storage tanks , which adds to the costs of the system.
Barriers to water reclamation may include:
Reclaimed water is considered safe when appropriately used. Reclaimed water planned for use in recharging aquifers or augmenting surface water receives adequate and reliable treatment before mixing with naturally occurring water and undergoing natural restoration processes. Some of this water eventually becomes part of drinking water supplies.
A study published in 2009 compared the differences in water quality between reclaimed/recycled water, surface water, and groundwater. [ 41 ] Results indicated that reclaimed water, surface water, and groundwater are more similar than dissimilar with regard to constituents. The researchers tested for 244 representative constituents typically found in water. When detected, most constituents were in the parts-per-billion and parts-per-trillion range. DEET (an insect repellant) and caffeine were found in all water types and in virtually all samples. Triclosan (in antibacterial soap and toothpaste) was found in all water types, but detected in higher levels (parts-per-trillion) in reclaimed water than in surface or groundwater. Very few hormones/steroids were detected in samples, and when detected were at very low levels. Haloacetic acids (a disinfection by-product) were found in all types of samples, even groundwater. The largest difference between reclaimed water and the other waters appears to be that reclaimed water has been disinfected and thus has disinfection byproducts (due to chlorine use).
A 2005 study found that there had been no instances of illness or disease from either microbial pathogens or chemicals, and the risks of using reclaimed water for irrigation are not measurably different from irrigation using potable water. [ 42 ]
A 2012 study conducted by the National Research Council in the United States found that the risk of exposure to certain microbial and chemical contaminants from drinking reclaimed water does not appear to be higher than the risk experienced in some current drinking water treatment systems, and may be orders of magnitude lower. [ 43 ] This report recommends adjustments to the federal regulatory framework that could enhance public health protection for both planned and unplanned (or de facto reuse) and increase public confidence in water reuse.
Using reclaimed water for non-potable uses saves potable water for drinking, since less potable water will be used for non-potable uses. [ 44 ]
It sometimes contains higher levels of nutrients such as nitrogen , phosphorus and oxygen which may help fertilize garden and agricultural plants when used for irrigation. [ citation needed ]
Fresh water makes up less than 3% of the world's water resources, and just 1% of that is readily available. Even though fresh water is scarce, only about 3% of the fresh water that is extracted goes to direct human consumption; the rest is mostly used for agriculture, which accounts for roughly two-thirds of all freshwater use. [ 45 ] [ 46 ] [ 47 ]
Reclaimed water can offer a viable and effective alternative to freshwater where freshwater supplies are scarce. Reclaimed water is utilized to maintain or increase lake levels, restore wetlands, and restore river flows during hot weather and droughts, protecting biodiversity. Additionally, reclaimed water is utilized for street cleaning, irrigation of urban green spaces, and industrial processes. Reclaimed water has the advantage of being a consistent source of water supply that is unaffected by seasonal droughts and weather changes. [ 46 ] [ 47 ] [ 48 ]
The usage of water reclamation decreases the pollution sent to sensitive environments. It can also enhance wetlands , which benefits the wildlife depending on that ecosystem . It also helps to reduce the likelihood of drought as recycling of water reduces the use of fresh water supply from underground sources. For instance, the San Jose/Santa Clara Water Pollution Control Plant instituted a water recycling program to protect the San Francisco Bay area's natural salt water marshes. [ 44 ]
The main potential risks that are associated with reclaimed wastewater reuse for irrigation purposes when the treatment is not adequate are the following: [ 49 ] [ 15 ]
Since 26 June 2023 [ 50 ] there has been an EU regulation on minimum requirements for water reuse for irrigation purposes. [ 51 ] The water quality requirements are divided into four categories depending on what is irrigated and how the irrigation is performed. The water quality parameters included are E. coli , BOD5, total suspended solids (TSS), turbidity, Legionella , and intestinal nematodes (helminth eggs).
In the Water Framework Directive , reuse of water is mentioned as one of the possible measures to achieve the Directive's quality goals. However, this remains a relatively vague recommendation rather than a requirement: Part B of Annex VI refers to reuse as one of the "supplementary measures which Member States within each river basin district may choose to adopt as part of the programme of measures required under Article 11(4)". [ 15 ]
Besides that, Article 12 of the Urban Wastewater Treatment Directive concerning the reuse of treated wastewater states that "treated wastewater shall be reused whenever appropriate", which some consider not specific enough to promote water reuse as it may leave too much room for interpretation as to what can be considered as an "appropriate" situation to reuse treated wastewater.
Despite the lack of common water reuse criteria at the EU level, several member states have issued their own legislative frameworks, regulations, or guidelines for different water reuse applications (e.g. Cyprus, France, Greece, Italy, and Spain).
However, an evaluation carried out by the European Commission on the water reuse standards of several member states concluded that they differed in their approach. There are important differences among the standards regarding permitted uses, parameters to be monitored, and limit values allowed. This lack of harmonization among water reuse standards could potentially create trade barriers for agricultural goods irrigated with reclaimed water. Once on the common market, the level of safety in the producing member states may be not considered sufficient by the importing countries. [ 52 ] The most representative standards on wastewater reuse from European member states are the following: [ 15 ]
A new EU law on water reuse for agricultural irrigation was expected to raise water reuse from 1.7 billion m 3 to 6.6 billion m 3 per year by 2023 and to cut water stress by 5%. [ 45 ] [ 53 ] [ needs update ]
In the U.S., the Clean Water Act of 1972 mandated elimination of the discharge of untreated waste from municipal and industrial sources to make water safe for fishing and recreation. The US federal government provided billions of dollars in grants for building sewage treatment plants around the country. Modern treatment plants, usually using oxidation and/or chlorination in addition to primary and secondary treatment, were required to meet certain standards. [ 54 ] [ clarification needed ]
Los Angeles County 's sanitation districts started providing treated wastewater for landscape irrigation in parks and golf courses in 1929. The first reclaimed water facility in California was built at San Francisco 's Golden Gate Park in 1932. The Water Replenishment District of Southern California was the first groundwater agency to obtain permitted use of recycled water for groundwater recharge in 1962.
Denver's Direct Potable Water Reuse Demonstration Project [ 55 ] examined the technical, scientific, and public acceptance aspects of DPR from 1979 to 1993. A chronic lifetime whole-animal health effects study on the 1 MGD advanced treatment plant product was conducted in conjunction with a comprehensive assessment of the chemical and microbiological water quality. The $30 million study found that the water produced met all health standards and compared favorably with Denver's high quality drinking water. Further, the projected cost was lower than estimates for obtaining distant new water supplies.
Reclaimed water is not regulated by the U.S. Environmental Protection Agency (EPA), but the EPA has developed water reuse guidelines that were most recently updated in 2012. [ 56 ] [ 57 ] The EPA Guidelines for Water Reuse represents the international standard for best practices in water reuse. The document was developed under a Cooperative Research and Development Agreement between the EPA, the U.S. Agency for International Development (USAID), and the global consultancy CDM Smith . The Guidelines provide a framework for states to develop regulations that incorporate the best practices and address local requirements.
Reuse of reclaimed water is an increasingly common response to water scarcity in many parts of the United States. Reclaimed water is being reused directly for various non-potable uses in the United States, including urban landscape irrigation of parks, school yards, highway medians and golf courses; fire protection; commercial uses such as vehicle washing; industrial reuse such as cooling water, boiler water and process water; environmental and recreational uses such as the creation or restoration of wetlands; as well as agricultural irrigation. [ 58 ] In some cases, such as in Irvine Ranch Water District in Orange County , it is also used for flushing toilets. [ 59 ]
Wastewater reuse (planned or unplanned) is a practice which has been applied throughout human history and is closely connected to the development of sanitation. [ 63 ]
In Singapore, water reclamation was pursued primarily due to geopolitical tensions arising from the country's dependency on water imported from Malaysia.
In South Africa, the main driver for wastewater reuse is drought conditions. [ 78 ] For example, in Beaufort West , South Africa, a direct wastewater reclamation plant (WRP) for the production of drinking water was constructed at the end of 2010 as a result of acute water scarcity (production of 2,300 m 3 per day). [ 79 ] [ 80 ] The process configuration is based on a multi-barrier concept and includes the following treatment processes: sand filtration, UF , two-stage RO , and disinfection of the permeate by ultraviolet light (UV). | https://en.wikipedia.org/wiki/Reclaimed_water |
A reclaimer is a large machine used in bulk material handling applications. A reclaimer's function is to recover bulk material such as ores and cereals from a stockpile . A stacker is used to stack the material.
Reclaimers are volumetric machines and are rated in m 3 /h (cubic meters per hour) for capacity, which is often converted to t/h (tonnes per hour) based on the average bulk density of the material being reclaimed. Reclaimers normally travel on a rail between stockpiles in the stockyard. A bucket wheel reclaimer can typically move in three directions: horizontally along the rail, vertically by "luffing" its boom, and rotationally by slewing its boom. Reclaimers are generally electrically powered by means of a trailing cable.
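The volumetric-to-mass conversion mentioned above is a straightforward multiplication by bulk density. A small sketch (Python, with a hypothetical machine and an approximate bulk density for iron ore) is shown below.

```python
def volumetric_to_mass_capacity(capacity_m3_per_h, bulk_density_t_per_m3):
    """Convert a reclaimer's volumetric rating (m3/h) to a mass rate (t/h)
    using the average bulk density of the material being handled."""
    return capacity_m3_per_h * bulk_density_t_per_m3

# Hypothetical example: a 5,000 m3/h machine handling iron ore at roughly 2.4 t/m3.
print(volumetric_to_mass_capacity(5000, 2.4))  # 12000.0 t/h
```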
Bucket wheel reclaimers use " bucket wheels " to remove material from the pile they are reclaiming. Scraper reclaimers use a series of scrapers on a chain to reclaim the material.
The reclaimer structure can be of a number of types, including portal and bridge. Reclaimers are named based on their type, for example, "Bridge reclaimer." Portal and bridge reclaimers can both use either bucket wheels or scrapers to reclaim the product. Bridge type reclaimers blend the stacked product as it is reclaimed.
Whenever material is laid down during any reclaiming process, it creates a pile. Blending bed stacker reclaimers form such piles in a circular fashion. They do this by taking reclaimed material and passing it through a conveyor system that rotates around the center of the pile to create a circle. This allows the pile to be evenly spread out during the reclaiming process and allows for the oldest material in the pile to be reclaimed before the newer material. During this process, a harrow tool is used to cut through the reclaimed material so that the material can be combined. [ 1 ] Some Blending bed reclaimers are equipped with rakes to ensure that no material gets stuck in the machine. These rakes are made with various materials and sizes based on the climate in which the reclaimer operates. In below freezing temperatures, a harder material is used to create a rake with modified edges, which allow for any ice or debris to be broken up before piling. [ 2 ]
Cantilever chain reclaimers are designed to use longer booms. They use a truss system that is connected to a liner and then to a chain; this chain is bolted onto the elevation chute and fixed to the reclaimer. The angle of the boom is then set by a cable-winch system and is supported using a cable system. With the cable, the boom can be lowered slightly during each reclaiming cycle. This chain system creates a push-and-pull effect that allows any loose material to be collected and moved to the edge of the reclaimed pile. After the loose material is collected, it is lifted and moved for further processing. [ 3 ]
A reclaimer is used principally in reclaiming processes. These processes can have low, medium, and high material flow rates. Reclaimers are made up of a bucket-wheel, a counterweight boom, and a rocker; they also use a conveyor system to move any material reclaimed from the boom to its specific pile. These machines can be assembled differently based on the required reclaiming load rate and boom length. These changes are made to accommodate the associated fluctuations in flow rates and load patterns. In the event of high material flow rates, a combination of a boom and bucket wheel is used. [ 2 ]
Stackers and Reclaimers were originally manually controlled machines with no remote control. Modern machines are typically fully automated with their parameters (for stacking or reclaiming) remotely set. Some older reclaimers may still be manually controlled, as reclaiming is more difficult to automate than stacking because the automatic detection of pile edges is complicated by different environmental conditions and different bulk materials. | https://en.wikipedia.org/wiki/Reclaimer |
Recode (stylized as recode ; formerly Re/code ) [ 1 ] was a technology news website that focused on the business of Silicon Valley . Walt Mossberg and Kara Swisher founded it in January 2014, after they left Dow Jones and the similar website they had previously co-founded, All Things Digital . Vox Media acquired Recode in May 2015 and, in May 2019, the Recode website was integrated into Vox . On March 6, 2023, Vox media announced that in order to make the various Vox sub brands less confusing to its readers, it was retiring Recode brand but would continue its mission to explain complex issues around technology to its readers under the unified Vox brand. [ 2 ]
In September 2013, technology journalists Walt Mossberg and Kara Swisher left All Things Digital , the technology news site they had founded and developed for Dow Jones and News Corp . Mossberg left The Wall Street Journal at the end of the year, leaving behind a popular, weekly technology column . [ 3 ] The two launched their new, independent technology news website, Recode , on January 2, 2014. Its holding company, Revere Digital, received minority investments from NBCUniversal and Terry Semel 's Windsor Media. [ 3 ] The total investment was estimated between US$10 and 15 million. Mossberg and Swisher held the company's majority stake and noted its comfortable financial stance. [ 4 ] Recode also provided breaking technology coverage for NBCUniversal, and received video resources and exposure in return via a formal partnership. Mossberg saw the investment as an opportunity to implement new ways of covering the technology field, and planned to add six employees on technology policy and mobile beats. The CNBC partnership also explored new advertising efforts and shared office space. [ 3 ] At launch, the 23-person team included all former members of All Things Digital . The staff also received equity in the company. [ 4 ]
Mossberg and Swisher planned to continue their prominent, annual All Things Digital conference, which they renamed the "Code" conference and scheduled for the same time and location: late May at Terranea Resort in Rancho Palos Verdes, California . Recode also kept plans to continue their separate mobile and media conferences. CNBC became a partner in these conferences. [ 3 ] A part-time team of 12 employees runs the conferences. [ 4 ]
The site developed a reputation for breaking tech industry news but ultimately did not reach the level of popularity it expected, with just 1.5 million regular monthly visitors. Vox Media acquired the website in May 2015 in a move that The New York Times described as a reflection of tumult in online technology journalism . [ 5 ] Vox purchased all of the company's stock, but the details of the transaction were not released. At the time of the acquisition, Recode had 44 employees and three additional employees by contract. They were expected to join Vox. Mossberg and Swisher planned to stay with the website. The two were impressed with Vox Media's audience reach. Vox's technology news website, The Verge , had eight times the traffic, in comparison. The scopes of the two sites were not expected to overlap with Recode 's emphasis on technology industry business and The Verge 's on "being a new kind of culture publication". [ 5 ] An internal study found a three percent overlap in content between the two sites. [ 5 ] Recode started publishing a podcast in July 2015 called Recode Decode . [ 6 ] [ 7 ] The podcast won "Tech Podcast of the Year" as well as "Podcast of the Year" at the 2019 Adweek Podcast Awards. [ 8 ]
On May 8, 2016, Recode relaunched with a new design under editor-in-chief Dan Frommer. [ 1 ] In May 2019, Recode was integrated into Vox Media's flagship website, Vox , becoming the column Recode by Vox . [ 9 ]
As continued from All Things Digital , [ 4 ] Recode focuses on technology and digital media news, particularly pertaining to the business of Silicon Valley . [ 3 ] The site also reviews new enterprises and consumer hardware and software, and conducts original reporting. [ 4 ] | https://en.wikipedia.org/wiki/Recode |
RECODE is a database of "programmed" frameshifts , bypassing and codon redefinition used for gene expression . [ 1 ]
| https://en.wikipedia.org/wiki/Recode_(database) |
Recognition-primed decision ( RPD ) is a model of how people make quick, effective decisions when faced with complex situations. In this model, the decision maker is assumed to generate a possible course of action, compare it to the constraints imposed by the situation, and select the first course of action that is not rejected. RPD has been described in diverse groups including trauma nurses, fireground commanders, chess players, and stock market traders. It functions well in conditions of time pressure, and in which information is partial and goals poorly defined. The limitations of RPD include the need for extensive experience among decision-makers (in order to correctly recognize the salient features of a problem and model solutions) and the problem of the failure of recognition and modeling in unusual or misidentified circumstances. It appears, as discussed by Gary A. Klein in Sources of Power , [ 1 ] to be a valid model for how human decision-makers make decisions.
The RPD model treats the first course of action that comes to mind as a reasonable candidate. RPD combines two ways of developing a decision: the first is recognizing which course of action makes sense, and the second is evaluating that course of action through mental simulation to see whether the actions it would produce make sense. However, the decision maker's level of experience plays a major role in this process.
RPD reveals a critical difference between experts and novices when presented with recurring situations. Experienced people will generally be able to come up with a quicker decision because the situation may match a prototypical situation they have encountered before. Novices, lacking this experience, must cycle through different possibilities, and tend to use the first course of action that they believe will work. Novices also tend to rely on mental trial and error, imagining how different options might play out.
There are three variations in RPD strategy. In Variation 1, decision makers recognize the situation as typical: a scenario where both the situational detail and the detail of relevant courses of action are known. Variation 1 is therefore essentially an “If… then…” reaction. A given situation will lead to an immediate course of action as a function of the situation's typicality. More experienced decision makers are more likely to have the knowledge of both prototypical situations and established courses of action that is required for an RPD strategy to qualify as Variation 1.
Variation 2 occurs when the decision maker diagnoses an unknown situation to choose from a known selection of courses of action. Variation 2 takes the form of “If (???)... then...,” a phrase which implies the decision maker's specific knowledge of available courses of action but lack of knowledge regarding the parameters of the situation. In order to prevent situational complications and the accrual of misinformation, the decision maker models possible details of the situation carefully and then chooses the most relevant known course of action. Experienced decision makers are more likely to correctly model the situation, and are thus more likely to more quickly choose more appropriate courses of action.
In Variation 3, the decision maker is knowledgeable of the situation but unaware of the proper course of action. The decision maker therefore implements a mental trial and error simulation to develop the most effective course of action. Variation 3 takes the form of “If... then... (???)” wherein the decision maker models outcomes of new or uncommon courses of action. The decision maker will cycle through different courses of action until a course of action appears appropriate to the goals and priorities of the situation. Due to the time constraint fundamental to the RPD model, the decision maker will choose the first course of action which appears appropriate to the situation. Experienced decision makers are likely to develop a viable course of action more quickly because their expert knowledge can rapidly be used to disqualify inappropriate courses of action.
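Taken together, the three variations can be read as a single branching decision procedure. The Python sketch below is purely illustrative: the prototype table, the similarity-based diagnosis step, and the mental-simulation check are invented placeholders rather than part of Klein's model.

```python
# Illustrative sketch of the three RPD variations as one branching procedure.
# The prototype table, the diagnosis step and the simulation check below are
# invented placeholders; Klein's model does not prescribe an implementation.

PROTOTYPES = {
    # typical situation -> courses of action known to work in it
    "kitchen fire": ["interior attack with hose line"],
    "basement fire": ["defensive exterior attack"],
    "chimney fire": [],  # recognized situation, but no ready-made response stored
}

def looks_workable(situation, action):
    # Stand-in for mental simulation (imagining the action playing out).
    return "risky" not in action

def rpd_decide(situation, candidate_actions=()):
    if situation in PROTOTYPES and PROTOTYPES[situation]:
        # Variation 1: typical situation, known response ("If... then...").
        return PROTOTYPES[situation][0]
    if situation not in PROTOTYPES:
        # Variation 2: unfamiliar situation ("If (???)... then...").
        # Diagnose it by mapping it onto the closest known prototype,
        # then reuse that prototype's established response.
        closest = max(PROTOTYPES,
                      key=lambda known: len(set(known.split()) & set(situation.split())))
        return PROTOTYPES[closest][0] if PROTOTYPES[closest] else None
    # Variation 3: situation recognized, action unknown ("If... then... (???)").
    # Mentally simulate candidate actions and accept the first workable one.
    for action in candidate_actions:
        if looks_workable(situation, action):
            return action
    return None

print(rpd_decide("kitchen fire"))                                              # Variation 1
print(rpd_decide("warehouse fire"))                                            # Variation 2
print(rpd_decide("chimney fire", ["risky roof access", "flue suppression"]))   # Variation 3
```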
Recognition-primed decision making is highly relevant to leaders and officers of emergency-service organizations such as firefighting, search and rescue, and police units. It applies to both experienced and inexperienced decision makers and to how they manage their decision-making processes. The model illustrates for such organizations how critical decisions can determine whether lives are saved or lost. It can also be used as a framework for identifying gaps and determining which RPD variation is most applicable to the organization. | https://en.wikipedia.org/wiki/Recognition-primed_decision |
The recognition heuristic , originally termed the recognition principle, has been used as a model in the psychology of judgment and decision making and as a heuristic in artificial intelligence . The goal is to make inferences about a criterion that is not directly accessible to the decision maker, based on recognition retrieved from memory. This is possible if recognition of alternatives has relevance to the criterion. For two alternatives, the heuristic is defined as: [ 1 ] [ 2 ] [ 3 ]
If one of two objects is recognized and the other is not, then infer that the recognized object has the higher value with respect to the criterion.
The recognition heuristic is part of the "adaptive toolbox" of "fast and frugal" heuristics proposed by Gigerenzer and Goldstein. It is one of the most frugal of these, meaning it is simple or economical. [ 3 ] [ 4 ] [ 5 ] In their original experiment, Daniel Goldstein and Gerd Gigerenzer quizzed students in Germany and the United States on the populations of both German and American cities. Participants received pairs of city names and had to indicate which city has more inhabitants. In this and similar experiments, the recognition heuristic typically describes about 80–90% of participants' choices, in cases where they recognize one but not the other object (see criticism of this measure below). Surprisingly, American students scored higher on German cities, while German participants scored higher on American cities, despite only recognizing a fraction of the foreign cities. This has been labeled the " less-is-more effect " and mathematically formalized. [ 6 ]
The recognition heuristic is posited as a domain-specific strategy for inference. It is ecologically rational to rely on the recognition heuristic in domains where there is a correlation between the criterion and recognition. The higher the recognition validity α for a given criterion, the more ecologically rational it is to rely on this heuristic and the more likely people will rely on it. For each individual, α can be computed as α = C/(C + W),
where C is the number of correct inferences the recognition heuristic would make, computed across all pairs in which one alternative is recognized and the other is not, and W is the number of wrong inferences. Domains in which the recognition heuristic was successfully applied include the prediction of geographical properties (such as the size of cities, mountains, etc.), [ 1 ] [ 2 ] of sports events (such as Wimbledon and soccer championships [ 7 ] [ 8 ] [ 9 ] ) and elections. [ 10 ] Research also shows that the recognition heuristic is relevant to marketing science. Recognition based heuristics help consumers choose which brands to buy in frequently purchased categories. [ 11 ] A number of studies addressed the question of whether people rely on the recognition heuristic in an ecologically rational way. For instance, name recognition of Swiss cities is a valid predictor of their population (α = 0.86) but not their distance from the center of Switzerland (α = 0.51). Pohl [ 12 ] reported that 89% of inferences accorded with the model in judgments of population, compared to only 54% in judgments of the distance. More generally, there is a positive correlation of r = 0.64 between the recognition validity and the proportion of judgments consistent with the recognition heuristic across 11 studies. [ 13 ] Another study by Pachur [ 14 ] suggested that the recognition heuristic is more likely a tool for exploring natural rather than induced recognition (i.e. not provoked in a laboratory setting) when inferences have to be made from memory. In one of his experiments, the results showed that there was a difference between participants in an experimental setting vs. a non-experimental setting.
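A minimal Python sketch of the heuristic and of the recognition validity α defined above; the city names, populations and recognition set are toy values chosen only for illustration, not data from the original studies.

```python
# A minimal sketch of the recognition heuristic and of the recognition
# validity alpha = C / (C + W). The city list, populations and the set of
# "recognized" objects are toy inputs; real studies measure recognition
# separately for each participant.

def recognition_heuristic(a, b, recognized):
    """Pick the recognized object; return None if recognition cannot decide."""
    if a in recognized and b not in recognized:
        return a
    if b in recognized and a not in recognized:
        return b
    return None  # both or neither recognized: the heuristic does not apply

def recognition_validity(pairs, recognized, criterion):
    """alpha = C / (C + W) over pairs where exactly one object is recognized."""
    correct = wrong = 0
    for a, b in pairs:
        choice = recognition_heuristic(a, b, recognized)
        if choice is None:
            continue
        truth = a if criterion[a] > criterion[b] else b
        if choice == truth:
            correct += 1
        else:
            wrong += 1
    return correct / (correct + wrong) if (correct + wrong) else float("nan")

# Toy example: populations in millions (illustrative numbers only).
population = {"Milan": 1.4, "Modena": 0.2, "Berlin": 3.6, "Herne": 0.16}
recognized = {"Milan", "Berlin"}
pairs = [("Milan", "Modena"), ("Berlin", "Herne"), ("Modena", "Herne")]
print(recognition_validity(pairs, recognized, population))  # 1.0 for this toy set
```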
If α > β, and α, β are independent of n, then a less-is-more effect will be observed. Here, β is the knowledge validity, measured as C/(C+W) for all pairs in which both alternatives are recognized, and n is the number of alternatives an individual recognizes. A less-is-more effect means that the function between accuracy and n is inversely U-shaped rather than monotonically increasing. Some studies reported less-is-more effects empirically among two, three, or four alternatives [ 1 ] [ 2 ] [ 15 ] and in group decisions, [ 16 ] whereas others failed to do so, [ 9 ] [ 12 ] possibly because the effect is predicted to be small (see Katsikopoulos [ 17 ] ).
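The less-is-more prediction can be made concrete by decomposing expected accuracy over pairs in which one, both, or neither alternative is recognized (with guessing at 0.5 on unrecognized pairs). The sketch below assumes this standard decomposition; the exact expression should be checked against Goldstein and Gigerenzer's formalization, and the parameter values are purely illustrative.

```python
# Expected accuracy f(n) when n of N objects are recognized, assuming:
#   - recognition heuristic (validity alpha) on pairs with one recognized object,
#   - knowledge (validity beta) on pairs with both objects recognized,
#   - guessing (0.5) on pairs with neither object recognized.
# This decomposition and the parameter values are assumptions of the sketch.

def expected_accuracy(n, N, alpha, beta):
    pairs = N * (N - 1)
    p_one     = 2 * n * (N - n) / pairs          # exactly one object recognized
    p_both    = n * (n - 1) / pairs              # both objects recognized
    p_neither = (N - n) * (N - n - 1) / pairs    # neither object recognized
    return p_one * alpha + p_both * beta + p_neither * 0.5

# With alpha > beta the curve is inversely U-shaped: recognizing everything
# (n = N) yields lower accuracy than recognizing only some of the objects.
N, alpha, beta = 100, 0.8, 0.6
for n in (0, 25, 50, 75, 100):
    print(n, round(expected_accuracy(n, N, alpha, beta), 3))
```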
Smithson explored the "less-is-more effect" (LIME) with the recognition heuristic and challenged some of the original assumptions. The LIME occurs when a "recognition-dependent agent has a greater probability of choosing the better item than a more knowledgeable agent who recognizes more items." Smithson's study used and modified the mathematical model describing the LIME, aiming to characterize mathematically when the LIME occurs and to explain the implications of the results. The main implication is "that the advantage of the recognition cue depends not only on the cue validities, but also on the order in which items are learned". [ 18 ]
The recognition heuristic can also be depicted using neuroimaging techniques. A number of studies have shown that people do not automatically use the recognition heuristic when it can be applied, but evaluate its ecological validity. It is less clear, however, how this evaluation process can be modeled. A functional magnetic resonance imaging study tested whether the two processes, recognition and evaluation, can be separated on a neural basis. [ 19 ] Participants were given two tasks; the first involved only a recognition judgment ("Have you ever heard of Modena? Milan?"), while the second involved an inference in which participants could rely on the recognition heuristic ("Which city has the larger population: Milan or Modena?"). For mere recognition judgments, activation in the precuneus, an area that is known from independent studies to respond to recognition confidence, [ 20 ] was reported. In the inference task, precuneus activation was also observed, as predicted, and activation was detected in the anterior frontomedian cortex (aFMC), which has been linked in earlier studies to evaluative judgments and self-referential processing. The aFMC activation could represent the neural basis of this evaluation of ecological rationality.
Some researchers have used event-related potentials (ERP) to test psychological mechanisms behind the recognition heuristic. Rosburg, Mecklinger, and Frings used a standard procedure with a city-size comparison task, similar to that used by Goldstein and Gigerenzer. They used ERP and analyzed familiarity-based recognition occurring 300-450 milliseconds after stimulus onset in order to predict the participants’ decisions. Familiarity-based recognition processes are relatively automatic and fast so these results provide evidence that simple heuristics like the recognition heuristic utilize basic cognitive processes. [ 21 ]
Research on the recognition heuristic has sparked a number of controversies.
The recognition heuristic is a model that relies on recognition only. This leads to the testable prediction that people who rely on it will ignore strong, contradicting cues (i.e., do not make trade-offs; so-called noncompensatory inferences). In an experiment by Daniel M. Oppenheimer participants were presented with pairs of cities, which included actual cities and fictional cities. Although the recognition heuristic predicts that participants would judge the actual (recognizable) cities to be larger, participants judged the fictional (unrecognizable) cities to be larger, showing that more than recognition can play a role in such inferences. [ 22 ]
Newell & Fernandez [ 4 ] performed two experiments to try to test the claims that the recognition heuristic is distinguished from availability and fluency through binary treatment of information and inconsequentiality of further knowledge. The results of their experiments did not support these claims. Newell & Fernandez and Richter & Späth tested the non-compensatory prediction of the recognition heuristic and stated that "recognition information is not used in an all-or-none fashion but is integrated with other types of knowledge in judgment and decision making." [ 23 ]
A reanalysis of these studies at an individual level, however, showed that typically about half of the participants consistently followed the recognition heuristic in every single trial, even in the presence of up to three contradicting cues. [ 24 ] Furthermore, in response to those criticisms, Marewski et al. [ 25 ] pointed out that none of the studies above formulated and tested a compensatory strategy against the recognition heuristic, leaving the strategies that participants relied on unknown. They tested five compensatory models and found that none could predict judgments better than the simple model of the recognition heuristic.
One major criticism raised against studies on the recognition heuristic was that mere accordance with the recognition heuristic is not a good measure of its use. As an alternative, Hilbig et al. proposed to test the recognition heuristic more precisely and devised a multinomial processing tree model for it. A multinomial processing tree model is a simple statistical model often used in cognitive psychology for categorical data . [ 26 ] Hilbig et al. claimed that a new model of recognition heuristic use was needed due to the confound between recognition and further knowledge. The multinomial processing tree model was shown to be effective and Hilbig et al. claimed that it provided an unbiased measure of the recognition heuristic. [ 27 ]
Pachur [ 28 ] stated that, although imperfect, it is currently still the best model for predicting people's recognition-based inferences. He believes that precise tests have limited value, largely because certain aspects of the recognition heuristic are often ignored, so the results could be inconsequential or misleading.
Hilbig et al. [ 27 ] state that heuristics are meant to reduce effort and that the recognition heuristic reduces effort in making judgments by relying on one single cue and ignoring other information. In their study, they found that the recognition heuristic is more useful in deliberate thought than in intuitive thought. That is, it is relied on more when thinking is deliberate and conscious rather than impulsive and intuitive. [ 29 ] In contrast, a study by Pachur and Hertwig [ 30 ] found that it is actually the faster responses that are more in line with the recognition heuristic. Also, judgments accorded more strongly with the recognition heuristic under time pressure. In line with these findings, neural evidence suggests that the recognition heuristic may be relied upon by default. [ 19 ]
Goldstein and Gigerenzer [ 31 ] state that due to its simplicity, the recognition heuristic shows to what degree and in what situations behavior can be predicted. Some researchers suggest that the idea of the recognition heuristic should be retired but Pachur believes that a different approach should be taken in testing it. There are some researchers who believe that the recognition heuristic should be investigated using precise tests of the exclusive use of recognition.
Using an adversarial collaboration approach, three special issues of the open access journal Judgment and Decision Making have been devoted to unravelling the support for and problems with the recognition heuristic, providing the most recent and comprehensive synopsis of the epistemic status quo. In their Editorial to Issue III, the three guest editors strive for a cumulative theory integration. [ 32 ] | https://en.wikipedia.org/wiki/Recognition_heuristic |
A recognition sequence is a DNA sequence to which a structural motif of a DNA-binding domain exhibits binding specificity . Recognition sequences are often, but not always, palindromes . [ 1 ]
The transcription factor Sp1 , for example, binds sequences matching 5'-(G/T)GGGCGG(G/A)(G/A)(C/T)-3', where (G/T) indicates that the domain will bind either a guanine or a thymine at this position.
The restriction endonuclease PstI recognizes, binds, and cleaves the sequence 5'-CTGCAG-3'.
A recognition sequence is different from a recognition site . A given recognition sequence can occur one or more times, or not at all, on a specific DNA fragment. A recognition site is specified by the position of the site. For example, there are two PstI recognition sites in the following DNA sequence fragment, starting at base 9 and 31 respectively. A recognition sequence is a specific sequence, usually very short (less than 10 bases). Depending on the degree of specificity of the protein, a DNA-binding protein can bind to more than one specific sequence. For PstI, which has a single sequence specificity, it is 5'-CTGCAG-3'. It is always the same whether at the first recognition site or the second in the following example sequence. For Sp1, which has multiple (16) sequence specificity as shown above, the two recognition sites in the following example sequence fragment are at 18 and 32, and their respective recognition sequences are 5'-GGGGCGGAGC-3' and 5'-TGGGCGGAAC-3'.
5'-AACGTTAG CTGCAG TC GGGGCGGAGC TAGG CTGCAG GAAT TGGGCGGAAC CT-3'
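The distinction between a recognition sequence and its recognition sites can be made concrete with a short script. The sketch below uses the PstI sequence and the example fragment above (with spaces removed) and reports 1-based start positions; the helper names are ad hoc and the code is illustrative only.

```python
# The recognition *sequence* is the string itself (5'-CTGCAG-3' for PstI);
# the recognition *sites* are the positions at which it occurs in a fragment.
# The fragment below is the example sequence above with spaces removed.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    return seq.translate(COMPLEMENT)[::-1]

def recognition_sites(fragment, recognition_sequence):
    """Return 1-based start positions of every occurrence (the recognition sites)."""
    sites, start = [], fragment.find(recognition_sequence)
    while start != -1:
        sites.append(start + 1)
        start = fragment.find(recognition_sequence, start + 1)
    return sites

fragment = "AACGTTAGCTGCAGTCGGGGCGGAGCTAGGCTGCAGGAATTGGGCGGAACCT"
pst1 = "CTGCAG"

print(reverse_complement(pst1) == pst1)   # True: the PstI sequence is palindromic
print(recognition_sites(fragment, pst1))  # [9, 31], the two PstI sites named above
```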
| https://en.wikipedia.org/wiki/Recognition_sequence |
A recognition signal is a signal by which a person, a ship , an airplane , or some other object can be recognized . Such signals can be used during war, or to help police officers recognize one another during undercover operations .
These signals are often used to distinguish friends from enemies in wartime. [ 1 ] [ 2 ] In military use, these signals often employ colored lights or the International marine signal flags .
Other uses of the signal include the police who sometimes use a recognition signal so that officers in uniform can recognize officers in normal clothing (undercover). [ 3 ] [ 4 ] The NYPD often use headbands , wristbands or colored clothing as recognition signals which are known as the " color of the day ". [ 4 ] | https://en.wikipedia.org/wiki/Recognition_signal |
Recoil is a rheological phenomenon, observed only in non-Newtonian fluids , in which a moving fluid snaps back toward a previous position when external forces are removed. Recoil results from the fluid's elasticity and memory: the speed and acceleration with which the fluid moves depend on its molecular structure, while the position to which it returns depends on its conformational entropy . The effect occurs to a small degree in many non-Newtonian liquids, but is prominent in some materials such as molten polymers .
The degree to which a fluid will “remember” where it came from depends on the entropy. Viscoelastic properties in fluids cause them to snap back to entropically favorable conformations. [ 1 ] Recoil is observed when a favorable conformation is in the fluid's recent past. However, the fluid cannot fully return to its original position due to energy losses stemming from less than perfect elasticity.
Recoiling fluids display fading memory: the longer a fluid is held elongated, the less it will recover. Recoil is related to the characteristic time, an order-of-magnitude estimate of the system's response time. Fluids described as recoiling generally have characteristic times on the order of a few seconds. [ 2 ] Although recoiling fluids usually recover relatively small distances, some molten polymers can recover up to 1/10 of the total elongation. [ 3 ] This property must be accounted for in polymer processing.
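As a purely illustrative model of this fading memory, one can assume that the recoverable part of an imposed elongation decays exponentially with the hold time on a single characteristic time scale. The relaxation form, the 1/10 ceiling on recovery, and the numerical values in the sketch below are assumptions of the sketch, not results from the rheology literature.

```python
# Illustrative "fading memory" sketch: recoverable strain decays with hold time
# on a single characteristic time. The exponential form, the 0.1 recovery
# ceiling and the numbers below are assumptions, not measured values.

import math

def recoverable_strain(total_strain, hold_time, characteristic_time, max_recovery=0.1):
    """Strain recovered on release, assuming exponential fading of elastic memory."""
    return max_recovery * total_strain * math.exp(-hold_time / characteristic_time)

# Characteristic time of a few seconds, as mentioned above; strains are arbitrary units.
for hold in (0.1, 1.0, 5.0, 30.0):
    print(hold, round(recoverable_strain(total_strain=1.0, hold_time=hold,
                                         characteristic_time=3.0), 4))
```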
When a spinning rod is placed in a polymer solution, elastic forces generated by the rotational motion cause fluid to climb up the rod (a phenomenon known as the Weissenberg effect ). If the applied torque is suddenly removed, the fluid recoils down the rod.
When a viscoelastic fluid being poured from a beaker is quickly cut with a pair of scissors, the fluid recoils back into the beaker.
When fluid at rest in a circular tube is subjected to a pressure drop, a parabolic flow distribution is observed that pulls the liquid down the tube. Immediately after the pressure is alleviated, the fluid recoils backward in the tube and forms a more blunt flow profile.
When Silly Putty is rapidly stretched and held at an elongated position for a short period of time, it springs back. However, if it is held at an elongated position for a longer period of time, there is very little recovery and no visible recoil. [ 4 ] | https://en.wikipedia.org/wiki/Recoil_(rheology) |
Recombinant adeno-associated virus ( rAAV ) based genome engineering is a genome editing platform centered on the use of recombinant AAV vectors that enables insertion, deletion or substitution of DNA sequences into the genomes of live mammalian cells. The technique builds on Mario Capecchi and Oliver Smithies ' Nobel Prize –winning discovery that homologous recombination (HR), a natural high-fidelity DNA repair mechanism, can be harnessed to perform precise genome alterations in mice. rAAV-mediated genome editing improves the efficiency of this technique to permit genome engineering in any pre-established and differentiated human cell line, which, in contrast to mouse ES cells, has low rates of HR.
The technique has been widely adopted for use in engineering human cell lines to generate isogenic human disease models . It has also been used to optimize bioproducer cell lines for the biomanufacturing of protein vaccines and therapeutics. In addition, due to the non-pathogenic nature of rAAV, it has emerged as a desirable vector for performing gene therapy in live patients.
The rAAV genome is built of single-stranded deoxyribonucleic acid (ssDNA), either positive- or negative-sensed, which is about 4.7 kilobases long. These single-stranded DNA viral vectors have high transduction rates and have a unique property of stimulating endogenous HR without causing double strand DNA breaks in the genome, which is typical of other homing endonuclease mediated genome editing methods.
Users can design a rAAV vector to any target genomic locus and perform both gross and subtle endogenous gene alterations in mammalian somatic cell-types. These include gene knock-outs for functional genomics, or the ‘knock-in’ of protein tag insertions to track translocation events at physiological levels in live cells. Most importantly, rAAV targets a single allele at a time and does not result in any off-target genomic alterations. [ 2 ] Because of this, it is able to routinely and accurately model genetic diseases caused by subtle SNPs or point mutations that are increasingly the targets of novel drug discovery programs. [ 2 ]
To date, the use of rAAV mediated genome engineering has been published in over 2100 peer reviewed scientific journals. [ 3 ] Another emerging application of rAAV based genome editing is for gene therapy in patients, due to the accuracy and lack of off-target recombination events afforded by the approach. | https://en.wikipedia.org/wiki/Recombinant_AAV_mediated_genome_engineering |
Recombinant DNA ( rDNA ) molecules are DNA molecules formed by laboratory methods of genetic recombination (such as molecular cloning ) that bring together genetic material from multiple sources, creating sequences that would not otherwise be found in the genome .
Recombinant DNA is the general name for a piece of DNA that has been created by combining two or more fragments from different sources. Recombinant DNA is possible because DNA molecules from all organisms share the same chemical structure, differing only in the nucleotide sequence. Recombinant DNA molecules are sometimes called chimeric DNA because they can be made of material from two different species like the mythical chimera . rDNA technology uses palindromic sequences and leads to the production of sticky and blunt ends .
The DNA sequences used in the construction of recombinant DNA molecules can originate from any species . For example, plant DNA can be joined to bacterial DNA, or human DNA can be joined with fungal DNA. In addition, DNA sequences that do not occur anywhere in nature can be created by the chemical synthesis of DNA and incorporated into recombinant DNA molecules. Using recombinant DNA technology and synthetic DNA, any DNA sequence can be created and introduced into living organisms.
Proteins that can result from the expression of recombinant DNA within living cells are termed recombinant proteins . When recombinant DNA encoding a protein is introduced into a host organism, the recombinant protein is not necessarily produced. [ 1 ] Expression of foreign proteins requires the use of specialized expression vectors and often necessitates significant restructuring of the foreign coding sequences. [ 2 ]
Recombinant DNA differs from genetic recombination in that the former results from artificial methods while the latter is a normal biological process that results in the remixing of existing DNA sequences in essentially all organisms.
Molecular cloning is the laboratory process used to produce recombinant DNA. [ 3 ] [ 4 ] [ 5 ] [ 6 ] It is one of the two most widely used methods, along with the polymerase chain reaction (PCR), for directing the replication of any specific DNA sequence chosen by the experimentalist. There are two fundamental differences between the methods. One is that molecular cloning involves replication of the DNA within a living cell, while PCR replicates DNA in the test tube, free of living cells. The other difference is that cloning involves cutting and pasting DNA sequences, while PCR amplifies by copying an existing sequence.
Formation of recombinant DNA requires a cloning vector , a DNA molecule that replicates within a living cell. Vectors are generally derived from plasmids or viruses , and represent relatively small segments of DNA that contain necessary genetic signals for replication, as well as additional elements for convenience in inserting foreign DNA, identifying cells that contain recombinant DNA, and, where appropriate, expressing the foreign DNA. The choice of vector for molecular cloning depends on the choice of host organism, the size of the DNA to be cloned, and whether and how the foreign DNA is to be expressed. [ 7 ] The DNA segments can be combined by using a variety of methods, such as restriction enzyme/ligase cloning or Gibson assembly . [ citation needed ]
In standard cloning protocols, the cloning of any DNA fragment essentially involves seven steps: (1) Choice of host organism and cloning vector, (2) Preparation of vector DNA, (3) Preparation of DNA to be cloned, (4) Creation of recombinant DNA, (5) Introduction of recombinant DNA into the host organism, (6) Selection of organisms containing recombinant DNA, and (7) Screening for clones with desired DNA inserts and biological properties. [ 6 ] These steps are described in some detail in a related article ( molecular cloning ).
DNA expression requires the transfection of suitable host cells. Typically, either bacterial, yeast, insect, or mammalian cells (such as Human Embryonic Kidney cells or CHO cells ) are used as host cells. [ 8 ]
Following transplantation into the host organism, the foreign DNA contained within the recombinant DNA construct may or may not be expressed . That is, the DNA may simply be replicated without expression, or it may be transcribed and translated and a recombinant protein is produced. Generally speaking, expression of a foreign gene requires restructuring the gene to include sequences that are required for producing an mRNA molecule that can be used by the host's translational apparatus (e.g. promoter , translational initiation signal , and transcriptional terminator ). [ 9 ] Specific changes to the host organism may be made to improve expression of the ectopic gene. In addition, changes may be needed to the coding sequences as well, to optimize translation, make the protein soluble, direct the recombinant protein to the proper cellular or extracellular location, and stabilize the protein from degradation. [ 10 ] [ 11 ] [ 12 ]
In most cases, organisms containing recombinant DNA have apparently normal phenotypes . That is, their appearance, behavior and metabolism are usually unchanged, and the only way to demonstrate the presence of recombinant sequences is to examine the DNA itself, typically using a polymerase chain reaction (PCR) test. [ 13 ] Significant exceptions exist, and are discussed below.
If the rDNA sequences encode a gene that is expressed, then the presence of RNA and/or protein products of the recombinant gene can be detected, typically using RT-PCR or western hybridization methods. [ 13 ] Gross phenotypic changes are not the norm, unless the recombinant gene has been chosen and modified so as to generate biological activity in the host organism. [ 14 ] Additional phenotypes that are encountered include toxicity to the host organism induced by the recombinant gene product, especially if it is over-expressed or expressed within inappropriate cells or tissues. [ citation needed ]
In some cases, recombinant DNA can have deleterious effects even if it is not expressed. One mechanism by which this happens is insertional inactivation , in which the rDNA becomes inserted into a host cell's gene. In some cases, researchers use this phenomenon to " knock out " genes to determine their biological function and importance. [ 15 ] Another mechanism by which rDNA insertion into chromosomal DNA can affect gene expression is by inappropriate activation of previously unexpressed host cell genes. This can happen, for example, when a recombinant DNA fragment containing an active promoter becomes located next to a previously silent host cell gene, or when a host cell gene that functions to restrain gene expression undergoes insertional inactivation by recombinant DNA. [ citation needed ]
Recombinant DNA is widely used in biotechnology , medicine and research. Today, recombinant proteins and other products that result from the use of DNA technology are found in essentially every pharmacy, physician or veterinarian office, medical testing laboratory, and biological research laboratory. In addition, organisms that have been manipulated using recombinant DNA technology, as well as products derived from those organisms, have found their way into many farms, supermarkets , home medicine cabinets , and even pet shops, such as those that sell GloFish and other genetically modified animals .
The most common application of recombinant DNA is in basic research, in which the technology is important to most current work in the biological and biomedical sciences. [ 13 ] Recombinant DNA is used to identify, map and sequence genes, and to determine their function. rDNA probes are employed in analyzing gene expression within individual cells, and throughout the tissues of whole organisms. Recombinant proteins are widely used as reagents in laboratory experiments and to generate antibody probes for examining protein synthesis within cells and organisms. [ 4 ]
Many additional practical applications of recombinant DNA are found in industry, food production, human and veterinary medicine, agriculture, and bioengineering. [ 4 ] Some specific examples are identified below.
Found in rennet , chymosin is the enzyme responsible for hydrolysis of κ - casein to produce para- κ -casein and glycomacropeptide , which is the first step in cheese formation, separating the milk into curd and whey . [ 16 ] It was the first genetically engineered food additive used commercially. Traditionally, processors obtained chymosin from rennet, a preparation derived from the fourth stomach of milk-fed calves. Scientists engineered a non-pathogenic strain (K-12) of E. coli bacteria for large-scale laboratory production of the enzyme. This microbiologically produced recombinant enzyme, structurally identical to the calf-derived enzyme, costs less and is produced in abundant quantities. Today about 60% of U.S. hard cheese is made with genetically engineered chymosin. In 1990, FDA granted chymosin " generally recognized as safe " (GRAS) status based on data showing that the enzyme was safe. [ 17 ]
Recombinant human insulin has almost completely replaced insulin obtained from animal sources (e.g. pigs and cattle) for the treatment of type 1 diabetes . A variety of different recombinant insulin preparations are in widespread use. [ 18 ] Recombinant insulin is synthesized by inserting the human insulin gene into E. coli , or yeast (Saccharomyces cerevisiae) [ 19 ] which then produces insulin for human use. [ 20 ] Insulin produced by E. coli requires further post-translational modifications (e.g. glycosylation) whereas yeasts are able to perform these modifications themselves by virtue of being more complex host organisms. The advantage of recombinant human insulin is that, unlike animal-sourced insulin, it does not stimulate an immune response in patients even after chronic use. [ 21 ]
Recombinant human growth hormone (HGH) is administered to patients whose pituitary glands generate insufficient quantities to support normal growth and development. Before recombinant HGH became available, HGH for therapeutic use was obtained from pituitary glands of cadavers. This unsafe practice led to some patients developing Creutzfeldt–Jakob disease . Recombinant HGH eliminated this problem, and is now used therapeutically. [ 22 ] It has also been misused as a performance-enhancing drug by athletes and others. [ 23 ] [ 24 ]
Recombinant factor VIII is a laboratory-produced form of factor VIII , a blood-clotting protein that is administered to patients with the bleeding disorder hemophilia , who are unable to produce factor VIII in quantities sufficient to support normal blood coagulation. [ 25 ] Before the development of recombinant factor VIII, the protein was obtained by processing large quantities of human blood from multiple donors, which carried a very high risk of transmission of blood borne infectious diseases , for example HIV and hepatitis B.
Hepatitis B infection can be successfully controlled through the use of a recombinant subunit hepatitis B vaccine , which contains a form of the hepatitis B virus surface antigen that is produced in yeast cells. The development of the recombinant subunit vaccine was an important and necessary development because hepatitis B virus, unlike other common viruses such as polio virus , cannot be grown in vitro . [ 26 ]
Recombinant antibodies (rAbs) are produced in vitro by the means of expression systems based on mammalian cells. Their monospecific binding to a specific epitope makes rAbs eligible not only for research purposes, but also as therapy options against certain cancer types, infections and autoimmune diseases. [ 27 ]
Each of the three widely used methods for diagnosing HIV infection has been developed using recombinant DNA. The antibody test ( ELISA or western blot ) uses a recombinant HIV protein to test for the presence of antibodies that the body has produced in response to an HIV infection. The DNA test looks for the presence of HIV genetic material using reverse transcription polymerase chain reaction (RT-PCR). Development of the RT-PCR test was made possible by the molecular cloning and sequence analysis of HIV genomes.
Golden rice is a recombinant variety of rice that has been engineered to express the enzymes responsible for β-carotene biosynthesis. [ 14 ] This variety of rice holds substantial promise for reducing the incidence of vitamin A deficiency in the world's population. [ 28 ] Golden rice is not currently in use, pending the resolution of regulatory and intellectual property issues. [ 29 ]
Commercial varieties of important agricultural crops (including soy, maize/corn, sorghum, canola, alfalfa and cotton) have been developed that incorporate a recombinant gene that results in resistance to the herbicide glyphosate (trade name Roundup ), and simplifies weed control by glyphosate application. [ 30 ] These crops are in common commercial use in several countries.
Bacillus thuringiensis is a bacterium that naturally produces a protein ( Bt toxin ) with insecticidal properties. [ 28 ] The bacterium has been applied to crops as an insect-control strategy for many years, and this practice has been widely adopted in agriculture and gardening. Recently, plants have been developed that express a recombinant form of the bacterial protein, which may effectively control some insect predators. Environmental issues associated with the use of these transgenic crops have not been fully resolved. [ 31 ]
The idea of recombinant DNA was first proposed by Peter Lobban, a graduate student of Prof. Dale Kaiser in the Biochemistry Department at Stanford University Medical School. [ 32 ] The first publications describing the successful production and intracellular replication of recombinant DNA appeared in 1972 and 1973, from Stanford and UCSF . [ 33 ] [ 34 ] [ 35 ] [ 36 ] In 1980 Paul Berg , a professor in the Biochemistry Department at Stanford and an author on one of the first papers [ 33 ] was awarded the Nobel Prize in Chemistry for his work on nucleic acids "with particular regard to recombinant DNA". Werner Arber , Hamilton Smith , and Daniel Nathans shared the 1978 Nobel Prize in Physiology or Medicine for the discovery of restriction endonucleases which enhanced the techniques of rDNA technology. [ citation needed ]
Stanford University applied for a U.S. patent on recombinant DNA on November 4, 1974, listing the inventors as Herbert W. Boyer (professor at the University of California, San Francisco ) and Stanley N. Cohen (professor at Stanford University ); this patent, U.S. 4,237,224A, was awarded on December 2, 1980. [ 37 ] [ 38 ] The first licensed drug generated using recombinant DNA technology was human insulin, developed by Genentech and licensed by Eli Lilly and Company . [ 39 ]
Scientists associated with the initial development of recombinant DNA methods recognized that the potential existed for organisms containing recombinant DNA to have undesirable or dangerous properties. At the 1975 Asilomar Conference on Recombinant DNA , these concerns were discussed and a voluntary moratorium on recombinant DNA research was initiated for experiments that were considered particularly risky. This moratorium was widely observed until the US National Institutes of Health developed and issued formal guidelines for rDNA work. Today, recombinant DNA molecules and recombinant proteins are usually not regarded as dangerous. However, concerns remain about some organisms that express recombinant DNA, particularly when they leave the laboratory and are introduced into the environment or food chain. These concerns are discussed in the articles on genetically modified organisms and genetically modified food controversies . Furthermore, there are concerns about the by-products in biopharmaceutical production, where recombinant DNA result in specific protein products. The major by-product, termed host cell protein , comes from the host expression system and poses a threat to the patient's health and the overall environment. [ 40 ] [ 41 ] | https://en.wikipedia.org/wiki/Recombinant_DNA |
Recombinases are genetic recombination enzymes .
DNA recombinases are widely used in multicellular organisms to manipulate the structure of genomes , and to control gene expression . These enzymes, derived from bacteriophages and fungi , catalyze directionally sensitive DNA exchange reactions between short (30–40 nucleotides ) target site sequences that are specific to each recombinase . These reactions enable four basic functional modules: excision/insertion, inversion, translocation and cassette exchange, which have been used individually or combined in a wide range of configurations to control gene expression. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ]
Types include:
Recombinases have a central role in homologous recombination in a wide range of organisms. Such recombinases have been described in archaea , bacteria , eukaryotes and viruses .
The archaeon Sulfolobus solfataricus RadA recombinase catalyzes DNA pairing and strand exchange, central steps in recombinational repair. [ 6 ] The RadA recombinase has greater similarity to the eukaryotic Rad51 recombinase than to the bacterial RecA recombinase. [ 6 ]
RecA recombinase appears to be universally present in bacteria. RecA has multiple functions, all related to DNA repair . RecA has a central role in the repair of replication forks stalled by DNA damage and in the bacterial sexual process of natural genetic transformation . [ 7 ] [ 8 ]
Eukaryotic Rad51 and its related family members are homologous to the archaeal RadA and bacterial RecA recombinases. Rad51 is highly conserved from yeast to humans. It has a key function in the recombinational repair of DNA damages, particularly double-strand damages such as double-strand breaks. In humans, over- or under- expression of Rad51 occurs in a wide variety of cancers .
During meiosis Rad51 interacts with another recombinase, Dmc1 , to form a presynaptic filament that is an intermediate in homologous recombination . [ 9 ] Dmc1 function appears to be limited to meiotic recombination. Like Rad51, Dmc1 is homologous to bacterial RecA.
Some DNA viruses encode a recombinase that facilitates homologous recombination. A well-studied example is the UvsX recombinase encoded by bacteriophage T4 . [ 10 ] UvsX is homologous to bacterial RecA. UvsX, like RecA, can facilitate the assimilation of linear single-stranded DNA into a homologous DNA duplex to produce a D-loop .
| https://en.wikipedia.org/wiki/Recombinase |
RMCE ( recombinase-mediated cassette exchange ) is a procedure in reverse genetics allowing the systematic, repeated modification of higher eukaryotic genomes by targeted integration, based on the features of site-specific recombination processes (SSRs). For RMCE, this is achieved by the clean exchange of a preexisting gene cassette for an analogous cassette carrying the "gene of interest" (GOI).
The genetic modification of mammalian cells is a standard procedure for the production of correctly modified proteins with pharmaceutical relevance. To be successful, the transfer and expression of the transgene has to be highly efficient and should have a largely predictable outcome. Current developments in the field of gene therapy are based on the same principles. Traditional procedures used for transfer of GOIs are not sufficiently reliable, mostly because the relevant epigenetic influences have not been sufficiently explored: transgenes integrate into chromosomes with low efficiency and at loci that provide only sub-optimal conditions for their expression . As a consequence, the newly introduced information may not be expressed, the gene(s) may be lost and/or re-insert elsewhere, and the target cells may be left in an unstable state. This is exactly the point at which RMCE enters the field. The procedure was introduced in 1994 [ 1 ] and it uses the tools yeasts and bacteriophages [ 2 ] have evolved for the efficient replication of important genetic information:
Most yeast strains contain circular, plasmid-like DNAs called "two-micron circles". The persistence of these entities is granted by a recombinase called "flippase" or "Flp" . Four monomers of this enzyme associate with two identical short (48 bp) target sites, called FRT ("flip-recombinase targets"), resulting in their crossover . The outcome of such a process depends on the relative orientation of the participating FRTs, leading to excision/insertion, inversion, or translocation of the intervening DNA depending on whether the two sites lie on the same molecule in direct orientation, in inverted orientation, or on separate molecules.
This spectrum of options could be extended significantly by the generation of spacer mutants for extended 48 bp FRT sites (cross-hatched half-arrows in Figure 1). Each mutant Fn recombines with an identical mutant Fn with an efficiency equal to the wildtype sites (F x F). A cross-interaction (F x Fn) is strictly prevented by the particular design of these components. This sets the stage for the situation depicted in Figure 1A:
First applied for the Tyr-recombinase Flp, this novel procedure is not only relevant to the rational construction of biotechnologically significant cell lines, but it also finds increasing use for the systematic generation of stem cells . Stem cells can be used to replace damaged tissue or to generate transgenic animals with largely pre-determined properties.
It has been previously established that coexpression of both Cre and Flp recombinases catalyzes the exchange of sequences flanked by single loxP and FRT sites integrated into the genome at a random location. However, these studies did not explore whether such an approach could be used to modify conditional mouse alleles carrying single or multiple loxP and FRT sites. dual RMCE (dRMCE; Osterwalder et al., 2010) was recently developed as a re-engineering tool applicable to the vast numbers of mouse conditional alleles that harbor wild-type loxP and FRT sites and therefore are not compatible with conventional RMCE. The general dRMCE strategy takes advantage of the fact that most conditional alleles encode a selection cassette flanked by FRT sites, in addition to loxP sites that flank functionally relevant exons ('floxed' exons). The FRT-flanked selection cassette is in general placed outside the loxP-flanked region, which renders these alleles directly compatible with dRMCE. Simultaneous expression of Cre and Flp recombinases induces cis recombination and formation of the deleted allele, which then serves as a 'docking site' at which to insert the replacement vector by trans recombination. The correctly replaced locus would encode the custom modification and a different drug-selection cassette flanked by single loxP and FRT sites. dRMCE therefore appears as a very efficient tool for targeted re-engineering of thousands of mouse alleles produced by the IKMC consortium.
Multiplexing setups rely on the fact that each F-Fn pair (consisting of a wildtype FRT site and a mutant called "n") or each Fn-Fm pair (consisting of two mutants, "m" and "n") constitutes a unique "address" in the genome. A prerequisite is a difference in at least four of the eight spacer positions (see Figure 1B). If the difference is below this threshold, some cross-interaction between the mutants may occur, leading to a faulty deletion of the sequence between the heterospecific (Fm/Fn or F/Fn) sites.
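This spacer rule can be illustrated with a simple Hamming-distance check: two FRT variants are treated as distinct addresses only if their eight-position spacers differ at four or more positions. The spacer sequences in the sketch below are hypothetical placeholders, not the published wild-type or mutant FRT spacers.

```python
# Minimal sketch of the spacer-design rule described above. The spacer
# sequences are hypothetical placeholders, not published FRT spacers.

def hamming(a, b):
    assert len(a) == len(b) == 8, "FRT spacers compared here are 8 bp long"
    return sum(x != y for x, y in zip(a, b))

def compatible_addresses(spacer_a, spacer_b, min_differences=4):
    """True if the two spacers should not cross-recombine (heterospecific pair)."""
    return hamming(spacer_a, spacer_b) >= min_differences

f_wt = "TCTAGAAA"   # hypothetical "wild-type" spacer
f_3  = "TTCAAATA"   # hypothetical mutant spacer
f_5  = "TTCAAAAG"   # hypothetical mutant spacer

print(compatible_addresses(f_wt, f_3))  # enough differences: usable side by side
print(compatible_addresses(f_3, f_5))   # too similar: risk of faulty cross-deletion
```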
13 FRT-mutants [ 3 ] [ 4 ] have meanwhile become available, which permit the establishment of several unique genomic addresses side by side (for instance F-Fn and Fm-Fo). These addresses will be recognized by donor plasmids that have been designed according to the same principles, permitting successive (but also synchronous) modifications at the predetermined loci . These modifications can be driven to completion provided that the compatible donor plasmid(s) are supplied in excess (mass-action principle). Figure 2 illustrates one use of the multiplexing principle: the stepwise extension of a coding region in which a basic expression unit is provided with genomic insulators , enhancers , or other cis -acting elements.
A recent variation of the general concept is based on PhiC31 (an integrase of the Ser-class) , which permits introduction of another RMCE target at a secondary site after the first RMCE-based modification has occurred. This is due to the fact that each phiC31-catalyzed exchange destroys the attP and attB sites it has addressed [ 2 ] converting them to att R and att L product sites, respectively. While these changes permit the subsequent mounting of new (and most likely remote) targets, they do not enable addressing several RMCE targets in parallel , nor do they permit "serial RMCE", i.e. successive, stepwise modifications at a given genomic locus.
This is different for Flp-RMCE, for which the post-RMCE status of FRT s corresponds to their initial state. This property enables the intentional, repeated mobilization of a target cassette by the addition of a new donor plasmid with compatible architecture. These "multiplexing-RMCE" options open unlimited possibilities for serial- and parallel specific modifications of pre-determined RMCE-targets [ 5 ]
Generation of transgenic knock-out/-in mice and their genetic modification by RMCE. [ 6 ] [ 7 ]
Insertion of a target cassette in a mammalian host cell line (CHO DG44 in suspension culture) and exchange with an ER stress reporter construct via targeted integration (RMCE). [ 8 ] | https://en.wikipedia.org/wiki/Recombinase-mediated_cassette_exchange |
Recombinase polymerase amplification (RPA) is a single tube, isothermal alternative to the polymerase chain reaction (PCR). [ 1 ] By adding a reverse transcriptase enzyme to an RPA reaction, it can detect RNA as well as DNA , without the need for a separate step to produce cDNA . [ 2 ] [ 3 ] [ 4 ] Because it is isothermal , RPA can use much simpler equipment than PCR, which requires a thermal cycler . Operating best at temperatures of 37–42 °C and still working, albeit more slowly, at room temperature means RPA reactions can in theory be run quickly by simply holding a tube in the hand. This makes RPA an excellent candidate for developing low-cost, rapid, point-of-care molecular tests. In an international quality assessment of molecular detection of Rift Valley fever virus , RPA performed as well as the best RT-PCR tests, detecting dilute samples that were missed by some PCR tests and by an RT-LAMP test. [ 5 ] RPA was developed and launched by TwistDx Ltd. (formerly known as ASM Scientific Ltd), a biotechnology company based in Cambridge, UK.
The RPA process employs three core enzymes – a recombinase , a single-stranded DNA-binding protein (SSB) and strand-displacing polymerase .
Recombinases are capable of pairing oligonucleotide primers with homologous sequence in duplex DNA. [ 1 ]
SSB bind to displaced strands of DNA and prevent the primers from being displaced.
Finally, the strand displacing polymerase begins DNA synthesis where the primer has bound to the target DNA.
Two opposing primers are used, much as in PCR; if the target sequence is indeed present, an exponential DNA amplification reaction is initiated. No other sample manipulation such as thermal or chemical melting is required to initiate amplification. At optimal temperatures (37–42 °C), the reaction progresses rapidly and results in specific DNA amplification from just a few target copies to detectable levels, typically within 10 minutes, for rapid detection of viral genomic DNA or RNA, [ 2 ] [ 3 ] [ 4 ] [ 6 ] [ 7 ] [ 8 ] pathogenic bacterial genomic DNA, [ 9 ] [ 10 ] as well as short length aptamer DNA. [ 11 ]
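A back-of-the-envelope calculation illustrates why exponential amplification can reach detectable levels within about ten minutes. The starting copy number, detection threshold, and doubling time in the sketch below are assumed, illustrative values rather than figures from the RPA literature.

```python
# Back-of-the-envelope sketch of exponential amplification time. All numbers
# (starting copies, detection threshold, doubling time) are assumed values.

import math

def time_to_detection(start_copies, detect_copies, doubling_time_s):
    doublings = math.log2(detect_copies / start_copies)
    return doublings * doubling_time_s

# e.g. from 10 target copies to ~10^10 copies with an assumed 20 s doubling time:
seconds = time_to_detection(start_copies=10, detect_copies=1e10, doubling_time_s=20)
print(round(seconds / 60, 1), "minutes")   # on the order of 10 minutes
```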
The three core RPA enzymes can be supplemented by further enzymes to provide extra functionality. Addition of exonuclease III allows the use of an exo probe for real-time, fluorescence detection akin to real-time PCR. [ 1 ] Addition of endonuclease IV means that an nfo probe can be used for lateral flow strip detection of successful amplification. [ 1 ] [ 6 ] [ 12 ] If a reverse transcriptase that works at 37–42 °C is added then RNA can be reverse transcribed and the cDNA produced amplified all in one step. Currently only the TwistAmp exo version of RPA is available with the reverse transcriptase included, although users can simply supplement other TwistAmp reactions with a reverse transcriptase to produce the same effect.
As with PCR, all forms of RPA reactions can be multiplexed by the addition of further primer/probe pairs, allowing the detection of multiple analytes or an internal control in the same tube.
RPA is one of several isothermal nucleic acid amplification techniques to be developed as a molecular diagnostic technique, frequently with the objective of simplifying the laboratory instrumentation required relative to PCR . A partial list of other isothermal amplification techniques include LAMP , NASBA , helicase-dependent amplification (HDA), and nicking enzyme amplification reaction (NEAR). The techniques differ in the specifics of primer design and reaction mechanism, and in some cases (like RPA) make use of cocktails of two or more enzymes. Like RPA, many of these techniques offer rapid amplification times with the potential for simplified instrumentation, and reported resistance to substances in unpurified samples that are known to inhibit PCR. With respect to amplification time, modern thermocyclers with rapid temperature ramps can reduce PCR amplification times to less than 30 minutes, particularly for short amplicons using dual-temperature cycling rather than the conventional three-temperature protocols. [ 13 ] In addition, the demands of sample prep (including lysis and extraction of DNA or RNA, if necessary) should be considered as part of the overall time and complexity inherent to the technique. These requirements vary according to the technique as well as to the specific target and sample type.
Compared to PCR, the guidelines for primer and probe design for RPA are less established, and may take a certain degree of trial and error, although recent results indicate that standard PCR primers can work as well. [ 14 ] The general principle of a discrete amplicon bounded by a forward and reverse primer with an (optional) internal fluorogenic probe is similar to PCR. PCR primers may be used directly in RPA, but their short length means that recombination rates are low and RPA will not be especially sensitive or fast. Typically 30–38 base primers are needed for efficient recombinase filament formation and RPA performance. This is in contrast to some other techniques such as LAMP which use a larger number of primers subject to additional design constraints. Although the original 2006 report of RPA describes a functional set of reaction components, the current (proprietary) formulation of the TwistAmp kit is "substantially different" [ 15 ] and is available only from the TwistDx supplier. This is in comparison to reaction mixtures for PCR which are available from many suppliers, or LAMP or NASBA for which the composition of the reaction mixture is freely published, allowing researchers to create their own customized "kits" from inexpensive ingredients.
Published scientific literature generally lacks detailed comparison of the performance of isothermal amplification techniques such as RPA, HDA, and LAMP relative to each other, often rather comparing a single isothermal technique to a "gold standard" PCR assay. This makes it difficult to judge the merits of these techniques independently from the claims of the manufacturers, inventors, or proponents. Furthermore, performance characteristics of any amplification technique are difficult to decouple from primer design: a "good" primer set for one target for RPA may give faster amplification or more sensitive detection than a "poor" LAMP primer set for the same target, but the converse may be true for different primer sets for a different target. An exception is a recent study comparing RT-qPCR, RT-LAMP, and RPA for detection of Schmallenberg virus and bovine viral diarrhea virus, [ 16 ] which effectively makes the point that each amplification technique has strengths and weaknesses, which may vary by the target, and that the properties of the available amplification techniques need to be evaluated in combination with the requirements for each application. As with PCR and any other amplification technique, there is obviously a publication bias, with poorly performing primer sets rarely deemed worthy of reporting. | https://en.wikipedia.org/wiki/Recombinase_polymerase_amplification |
Recombination hotspots are regions in a genome that exhibit elevated rates of recombination relative to a neutral expectation. The recombination rate within hotspots can be hundreds of times that of the surrounding region. [ 1 ] Recombination hotspots result from higher DNA break formation in these regions, and apply to both mitotic and meiotic cells. This appellation can refer to recombination events resulting from the uneven distribution of programmed meiotic double-strand breaks. [ 2 ]
Meiotic recombination through crossing over is thought to be a mechanism by which a cell promotes correct segregation of homologous chromosomes and the repair of DNA damages. Crossing over requires a DNA double-stranded break followed by strand invasion of the homolog and subsequent repair. [ 3 ] Initiation sites for recombination are usually identified by mapping crossing over events through pedigree analysis or through analysis of linkage disequilibrium . Linkage disequilibrium has identified more than 30,000 hotspots within the human genome. [ 3 ] In humans, the average number of crossover recombination events per hotspot is one crossover per 1,300 meioses, and the most extreme hotspot has a crossover frequency of one per 110 meioses. [ 4 ]
Recombination can also occur due to errors in DNA replication that lead to genomic rearrangements. These events are often associated with pathology. However, genomic rearrangement is also thought to be a driving force in evolutionary development as it gives rise to novel gene combinations. [ 5 ] Recombination hotspots may arise from the interaction of the following selective forces: the benefit of driving genetic diversity through genomic rearrangement coupled with selection acting to maintain favorable gene combinations. [ 6 ]
DNA contains "fragile sites" within the sequence that are more prone to recombination. These fragile sites are associated with the following trinucleotide repeats: CGG-CCG, CAG-CTG, GAA-TTC, and GCN-NGC. [ 5 ] These fragile sites are conserved in mammals and in yeast, suggesting that the instability is caused by something inherent to the molecular structure of DNA and is associated with DNA-repeat instability. [ 5 ] These fragile sites are thought to form hairpin structures on the lagging strand during replication from single-stranded DNA base-pairing with itself in the trinucleotide repeat region. [ 5 ] These hairpin structures cause DNA breaks that lead to a higher frequency of recombination at these sites. [ 5 ]
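As a rough illustration of what such repeat tracts look like in sequence data, the Python sketch below scans a DNA string for uninterrupted runs of the definite repeat motifs named above (the degenerate GCN-NGC motif is omitted); the example sequence and the five-unit threshold are arbitrary choices for demonstration, not values from the cited studies.

```python
# Illustrative sketch: scanning a DNA sequence for tracts of the trinucleotide
# repeats associated with fragile sites (CGG/CCG, CAG/CTG, GAA/TTC).
# The input sequence and the length threshold are hypothetical.
import re

FRAGILE_TRIPLETS = ["CGG", "CCG", "CAG", "CTG", "GAA", "TTC"]

def repeat_tracts(seq: str, min_units: int = 5):
    """Yield (triplet, start, number_of_units) for uninterrupted repeat tracts."""
    seq = seq.upper()
    for triplet in FRAGILE_TRIPLETS:
        for m in re.finditer(f"(?:{triplet}){{{min_units},}}", seq):
            yield triplet, m.start(), len(m.group(0)) // 3

example = "ATGCAGCAGCAGCAGCAGCAGCAGTTTCGGCGGCGGCGGCGGCGGAAA"  # made-up sequence
for triplet, start, units in repeat_tracts(example):
    print(f"{triplet} x {units} at position {start}")
```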
Recombination hotspots are also thought to arise due to higher-order chromosome structure that makes some areas of the chromosome more accessible to recombination than others. [ 6 ] A double-strand break initiation site was identified in mice and yeast, located at a common chromatin feature: the trimethylation of lysine 4 of histone H3 ( H3K4me3 ). [ 3 ]
Recombination hotspots do not seem to be solely caused by DNA sequence arrangements or chromosome structure. Alternatively, initiation sites of recombination hotspots can be coded for in the genome. Through the comparison of recombination between different mouse strains, the Dsbc1 locus was identified as contributing to the specification of initiation sites in the genome in at least two recombination hotspot locations. [ 3 ] Additional crossover mapping located the Dsbc1 locus to the 12.2 to 16.7-Mb region of mouse chromosome 17, which contains the PRDM9 gene. The PRDM9 gene encodes a histone methyltransferase in the Dsbc1 region, providing evidence of a non-random, genetic basis for recombination initiation sites in mice. [ 3 ] Rapid evolution of the PRDM9 gene explains the observation that humans and chimpanzees share few recombination hotspots, despite a high level of sequence identity. [ 7 ]
Homologous recombination in functional regions of DNA is strongly stimulated by transcription , as observed in a range of different organisms. [ 8 ] [ 9 ] [ 10 ] [ 11 ] Transcription-associated recombination appears to be due, at least in part, to the ability of transcription to open the DNA structure and enhance accessibility of DNA to exogenous chemicals and internal metabolites that cause recombinogenic DNA damage . [ 10 ] These findings suggest that transcription-associated recombination may contribute significantly to recombination hotspot formation.
Homologous recombination is very frequent in RNA viruses. [ 12 ] Recombination frequently occurs among very similar viruses, where crossover sites may occur anywhere across the genome, but under selection pressure these sites tend to localize in certain regions/hotspots. [ 13 ] For example, in enteroviruses, recombination hotspots have been identified at the 5'UTR-capsid region junction, and at the beginning of the P2 region. [ 14 ] These two hotspots flank the P1 region, which encodes the capsid. [ 14 ] In coronaviruses, the Spike genomic region is a recombination hotspot. [ 15 ] [ 16 ] | https://en.wikipedia.org/wiki/Recombination_hotspot
Recombineering (recombination-mediated genetic engineering) [ 1 ] is a genetic and molecular biology technique based on homologous recombination systems, as opposed to the older/more common method of using restriction enzymes and ligases to combine DNA sequences in a specified order. Recombineering is widely used for bacterial genetics, in the generation of target vectors for making a conditional mouse knockout , and for modifying DNA of any source often contained on a bacterial artificial chromosome (BAC), among other applications.
Although developed in bacteria, much of the inspiration for recombineering techniques came from methods first developed in Saccharomyces cerevisiae [ 2 ] where a linear plasmid was used to target genes or clone genes off the chromosome. In addition, recombination with single-strand oligonucleotides (oligos) was first shown in Saccharomyces cerevisiae . [ 3 ] Recombination was observed to take place with oligonucleotides as short as 20 bases.
Recombineering is based on homologous recombination in Escherichia coli mediated by bacteriophage proteins, either RecE/RecT from Rac prophage [ 4 ] or Redαβγ from bacteriophage lambda . [ 5 ] [ 6 ] The lambda Red recombination system is now most commonly used and the first demonstrations of Red in vivo genetic engineering were independently made by Kenan Murphy [ 7 ] and Francis Stewart. [ 4 ] [ 5 ] However, Murphy's experiments required expression of RecA and also employed long homology arms. Consequently, the implications for a new DNA engineering technology were not obvious. The Stewart lab showed that these homologous recombination systems mediate efficient recombination of linear DNA molecules flanked by homology sequences as short as 30 base pairs (40-50 base pairs are more efficient) into target DNA sequences in the absence of RecA. Now the homology could be provided by oligonucleotides made to order, and standard recA cloning hosts could be used, greatly expanding the utility of recombineering.
Recombineering utilizes linear DNA substrates that are either double-stranded (dsDNA) or single-stranded (ssDNA). Most commonly, dsDNA recombineering has been used to create gene replacements, deletions, insertions, and inversions. Gene cloning [ 6 ] [ 8 ] and gene/protein tagging (His tags etc., see [ 9 ] ) are also common. For gene replacements or deletions, usually a cassette encoding a drug-resistance gene is made by PCR using bi-partite primers. These primers consist of (from 5′ to 3′) 50 bases of homology to the target region, where the cassette is to be inserted, followed by 20 bases to prime the drug-resistance cassette. The exact junction sequence of the final construct is determined by primer design. [ 10 ] [ 11 ] These events typically occur at a frequency of approximately 10⁴ per 10⁸ cells that survive electroporation . Electroporation is the method used to transform the linear substrate into the recombining cell.
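As an illustration of this bi-partite design, the Python sketch below assembles hypothetical forward and reverse primers from 50-base homology arms and 20-base cassette-priming segments. The sequences and helper names are placeholders, not validated designs from the recombineering literature.

```python
# Minimal sketch of bi-partite primer assembly for dsDNA recombineering.
# All sequences below are hypothetical placeholders, not validated designs.

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence (A/C/G/T only)."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq.upper()))

def bipartite_primers(target_up50: str, target_down50: str,
                      cassette_5p20: str, cassette_3p20: str):
    """Return (forward, reverse) primers: 50 nt of target homology (5'->3')
    followed by 20 nt that prime amplification of the drug-resistance cassette."""
    assert len(target_up50) == 50 and len(target_down50) == 50
    assert len(cassette_5p20) == 20 and len(cassette_3p20) == 20
    forward = target_up50 + cassette_5p20
    # The reverse primer carries the reverse complement of the downstream target
    # homology and of the cassette's 3' end, so that both primers read 5'->3'.
    reverse = revcomp(target_down50) + revcomp(cassette_3p20)
    return forward, reverse

# Hypothetical placeholders only (real designs use target- and cassette-specific sequences):
fwd, rev = bipartite_primers("A" * 50, "G" * 50,
                             "ATGCATGCATGCATGCATGC", "TTGACCTTGACCTTGACCTT")
print(len(fwd), len(rev))  # 70 70
```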
In some cases, one desires a deletion with no marker left behind, to make a gene fusion, or to make a point mutant in a gene. This can be done with two rounds of recombination. [ 12 ] In the first stage of recombineering, a selection marker on a cassette is introduced to replace the region to be modified. In the second stage, a second counterselection marker (e.g. sacB) on the cassette is selected against following introduction of a target fragment containing the desired modification. Alternatively, the target fragment could be flanked by loxP or FRT sites, which could be removed later simply by the expression of the Cre or FLP recombinases, respectively.
A novel selection marker "mFabI" was also developed to increase recombineering efficiency. [ 13 ]
Recombineering with ssDNA provided a breakthrough both in the efficiency of the reaction and the ease of making point mutations. [ 1 ] This technique was further enhanced by the discovery that by avoiding the methyl-directed mismatch repair system, the frequency of obtaining recombinants can be increased to over 10⁷ per 10⁸ viable cells. [ 14 ] This frequency is high enough that alterations can now be made without selection. With optimized protocols, over 50% of the cells that survive electroporation contain the desired change. Recombineering with ssDNA only requires the Red Beta protein; Exo, Gamma and the host recombination proteins are not required. As proteins homologous to Beta and RecT are found in many bacteria and bacteriophages (>100 as of February 2010), recombineering is likely to work in many different bacteria. [ 15 ] Thus, recombineering with ssDNA is expanding the genetic tools available for research in a variety of organisms. To date, recombineering has been performed in E. coli , S. enterica , Y. pseudotuberculosis , S. cerevisiae and M. tuberculosis . [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ] [ 21 ]
In 2010, it was demonstrated that ssDNA recombination can occur in the absence of known recombination functions. [ 22 ] Recombinants were found at frequencies of up to 10⁴ per 10⁸ viable cells. This Red-independent activity has been demonstrated in P. syringae , E. coli , S. enterica serovar Typhimurium and S. flexneri .
The biggest advantage of recombineering is that it obviates the need for conveniently positioned restriction sites , whereas in conventional genetic engineering, DNA modification is often compromised by the availability of unique restriction sites. In engineering large constructs of >100 kb, such as bacterial artificial chromosomes (BACs), or chromosomes, recombineering has become a necessity. Recombineering can generate the desired modifications without leaving any 'footprints' behind. It also forgoes multiple cloning stages for generating intermediate vectors and therefore is used to modify DNA constructs in a relatively short time-frame. The homology required is short enough that it can be generated in synthetic oligonucleotides, and recombination with short oligonucleotides themselves is highly efficient. Recently, recombineering has been developed for high-throughput DNA engineering applications termed 'recombineering pipelines'. [ 23 ] Recombineering pipelines support the large-scale production of BAC transgenes and gene targeting constructs for functional genomics programs such as EUCOMM (European Conditional Mouse Mutagenesis Consortium) and KOMP (Knock-Out Mouse Program). Recombineering has also been automated, a process called MAGE (multiplex automated genome engineering), in the Church lab. [ 24 ] With the development of CRISPR technologies, construction of CRISPR interference strains in E. coli requires only one-step oligo recombineering, providing a simple and easy-to-implement tool for gene expression control. [ 12 ] [ 25 ] "Recombineering tools" and laboratory protocols have also been implemented for a number of plant species. These tools and procedures are customizable, scalable, and freely available to all researchers. [ 26 ] | https://en.wikipedia.org/wiki/Recombineering
A recommended exposure limit ( REL ) is an occupational exposure limit that has been recommended by the United States National Institute for Occupational Safety and Health . [ 1 ] The REL is a level that NIOSH believes would be protective of worker safety and health over a working lifetime if used in combination with engineering and work practice controls, exposure and medical monitoring, posting and labeling of hazards, worker training and personal protective equipment . To formulate these recommendations, NIOSH evaluates all known and available medical, biological, engineering, chemical, trade, and other information. Although not legally enforceable limits, RELs are transmitted to the Occupational Safety and Health Administration (OSHA) or the Mine Safety and Health Administration (MSHA) of the U.S. Department of Labor for use in promulgating legal standards. [ 1 ] [ 2 ]
All RELs are located in the NIOSH Pocket Guide to Chemical Hazards , along with other key data for 677 chemical or substance groupings. The Pocket Guide is a source of general industrial hygiene information for workers, employers, and occupational health professionals. [ 1 ]
NIOSH recommendations are also published in a variety of documents, including:
In addition to these publications, NIOSH periodically presents testimony before various Congressional committees and at OSHA and MSHA rulemaking hearings. [ 1 ]
National Institute for Occupational Safety and Health (NIOSH) RELs are designed to protect the health and well-being of workers by recommending safe exposure levels. To apply these guidelines effectively, safety professionals need to understand the recommended exposure levels, how to measure workplace exposures, and how to control worker exposure to hazardous substances. Beyond knowing the limit values, this requires regular monitoring and adjustment of controls as needed to keep workers safe.
RELs are written as time-weighted average (TWA) exposures. The TWA is calculated for a standard workday of up to 10 hours during a 40-hour workweek. This differs slightly from permissible exposure limits (PELs), which are calculated for an 8-hour workday over a 40-hour workweek. [ 1 ] NIOSH recognizes that certain scenarios demand more immediate attention and has therefore introduced additional measures. Because a worker may be exposed to a high concentration of a substance over a short period rather than throughout the whole workday, NIOSH also sets short-term exposure limits (STELs), defined as the concentration of a substance that should not be exceeded during any 15-minute period. There is also a ceiling limit (C), a concentration that should never be exceeded at any time. These limits are intended to strike a balance between protecting workers from harm and avoiding unnecessary burdens on workplace operations. Unlike the PELs set by OSHA , RELs are guidelines and are not legally enforceable; an employer cannot be held liable for exceeding them but may choose to implement them in the workplace. RELs undergo more frequent revision and tend to be more stringent than the PELs established by OSHA. [ 3 ] This stringency reflects the latest scientific understanding and advances in occupational health, [ 1 ] prioritizing current knowledge in worker safety. | https://en.wikipedia.org/wiki/Recommended_exposure_limit
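As a rough illustration of how a TWA exposure is computed and compared against a limit, the Python sketch below averages hypothetical interval measurements over a 10-hour shift; the concentrations, durations, and the example REL value are invented for demonstration and are not actual NIOSH figures.

```python
# Illustrative sketch of a time-weighted average (TWA) exposure calculation.
# The sample intervals and the REL value are hypothetical, for demonstration only.

def twa(samples, averaging_hours):
    """Time-weighted average concentration.

    samples: list of (concentration, duration_hours) pairs covering the shift;
    averaging_hours: the averaging period (e.g. up to 10 h for a NIOSH REL TWA,
    8 h for an OSHA PEL TWA)."""
    return sum(c * t for c, t in samples) / averaging_hours

# Hypothetical 10-hour shift measured in four intervals (concentration in ppm):
shift = [(4.0, 3.0), (7.5, 2.0), (2.0, 4.0), (9.0, 1.0)]
exposure = twa(shift, averaging_hours=10.0)

rel_twa_ppm = 5.0  # hypothetical REL for illustration, not an actual NIOSH value
print(f"TWA over the shift: {exposure:.2f} ppm "
      f"({'within' if exposure <= rel_twa_ppm else 'exceeds'} the assumed REL)")
```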
Recompose is a public benefit corporation founded by designer and death care advocate Katrina Spade in 2017, [ 1 ] building upon her 2014 non-profit organization Urban Death Project .
Recompose is a Washington state based company offering a death care service to convert human bodies into soil through a process known as natural organic reduction, or human composting . The process, which takes about 30 days, [ 2 ] is marketed as a green alternative to the existing disposal options of cremation and burial. [ 1 ] [ 3 ]
Recompose has a patent pending process where bodies are placed in a vessel with natural materials such as wood chips and alfalfa. [ 3 ] A fan system is set up to provide air that ensures enough oxygen is getting to the body, and the soft tissue [ 4 ] breaks down in about a month, transforming into about two wheelbarrows worth of soil. [ 3 ] Families of the deceased can keep the soil, use it to plant a tree, or through a partnership with Forterra , Washington's largest land conservation organization, can donate soil to help rehabilitate forest land in Washington State. [ 5 ]
To demonstrate that natural organic reduction is safe and effective, Recompose participated in a study at Washington State University designed and managed by soil scientist Lynne Carpenter-Boggs. [ 6 ] Six donors participated in the study, and Carpenter-Boggs, who is working for Recompose as a paid advisor, [ 7 ] indicated the result "was clean, rich, odorless soil that passed all federal and state safety guidelines for potentially hazardous pathogens and pollutants, such as metals". [ 6 ]
Recompose founder Spade was raised in rural New Hampshire by a family that was not religious but found spirituality in nature. [ 8 ] [ 9 ]
Considering her own mortality Spade wanted more options that were less toxic, [ 10 ] environmentally and economically friendly, [ 11 ] and options that allowed family and friends to participate in the care of their loved one. [ 12 ] She formulated early ideas about the possibility of human recomposition but when she learned about the practice of livestock mortality composting , she began work to create the same option for humans. [ 13 ]
Katrina Spade was awarded the Echoing Green Climate Fellowship for this work in 2014. [ 14 ]
Urban Death Project was founded in 2014. [ 9 ] It formed a partnership with Western Carolina University 's Forensic Anthropology Department. [ 15 ]
Urban Death Project's Kickstarter Campaign raised $91,000 from over 1200 Backers in 2015. [ 16 ]
Research began in 2016 with Washington State University's Soil Science Department, led by Lynne Carpenter-Boggs, PhD, Associate Professor of Sustainable and Organic Agriculture, [ 11 ] while law students at Wake Forest University School of Law examined the legal hurdles. [ 11 ]
In 2017 Urban Death Project's Western Carolina University Forensic Anthropology partnership was featured in Caitlin Doughty 's bestseller From Here to Eternity: Traveling the World to Find the Good Death .
In 2017 the non-profit Urban Death Project dissolved and Recompose (a benefit corporation ) was founded.
After Washington State legalized natural organic reduction in 2020, Recompose opened its first facility on December 20, 2020, [ 17 ] outside of Seattle, Washington . The original plan for an 18,500-square-foot facility, designed by architecture firm Olson Kundig and housing 75 vessels "arranged to surround a large, airy gathering space", [ 18 ] was put on hold due to COVID-19. [ 19 ] Instead, a much smaller location, described by Spade as "a workhorse facility", holding only 10 vessels and with no public-gathering space, opened in Kent, Washington . [ 19 ] However, friends and chosen family of the deceased can watch the laying-in process over a livestream. [ 17 ]
Recon Instruments was a Canadian technology company that produced smartglasses and wearable displays marketed by the company as "heads-up displays" for sports. (However, none of Recon's products contained a transparent display element delivering actual see-through capability, so they cannot be considered heads-up displays in the true meaning of the term.) Recon's products delivered live activity metrics, GPS maps, and notifications directly to the user's eye. Recon's first heads-up display offering was released commercially in October 2010, [ 1 ] roughly a year and a half before Google introduced Google Glass . [ 2 ]
Recon received investments from companies including Motorola Solutions and Intel . [ 3 ] [ 4 ] It also partnered with enterprise software vendors in order to make its latest smart eyewear device, the Jet, suitable for industrial applications. [ 5 ] [ 6 ]
On June 17, 2015, Recon was acquired by Intel . [ 7 ] [ 8 ] Recon then described itself as "an Intel company." [ 9 ]
In June 2017, Intel announced that all remaining Recon Instruments products were going to be discontinued by the end of the year. [ 10 ] According to a Bloomberg report in October 2017, Intel had in fact completely closed its Recon Instruments division already in early summer 2017. [ 11 ]
The technology behind Recon Instruments' products was born in September 2006 from an integrated MBA project. That project was undertaken by co-founders Dan Eisenhardt, Hamid Abdollahi, Fraser Hall, and Darcy Hughes at the University of British Columbia , Robert H. Lee Sauder School of Business .
Recon Instruments incorporated in January 2008, operating from small office and lab spaces rented from the University of British Columbia. In April 2010, the company moved to its current headquarters in the Yaletown area of downtown Vancouver . [ 12 ] As of March 2015, Recon is still led by co-founders Dan Eisenhardt and Hamid Abdollahi. [ 9 ]
Recon's co-founders originally looked into developing a HUD product for swimmers . Eisenhardt, a competitive swimmer himself, believed a HUD would be a valuable replacement for the clock at the side of the pool. Eisenhardt and his fellow founders developed the idea while studying at the University of British Columbia. However, a patent already existed for swimming goggles with a heads-up display. Because of that patent and the challenges presented by the technology's small form factor and intended operating conditions, the team eventually chose to focus on a winter sports product. [ 12 ] The co-founders subsequently turned this school project into their first retail product, which was distributed globally in October 2010. [ 1 ]
Recon has received investments from both venture-capital firms and other technology companies. [ citation needed ]
In January 2012, Recon received $10 million in Series A funding from Vanedge Capital and Kopin Corporation . Vanedge Capital is a Canadian venture capital firm that specializes in "interactive entertainment and digital media businesses." Kopin Corporation is a U.S. firm known for microdisplays aimed at mobile electronics. [ 13 ]
In September 2013, Intel Capital , the venture capital arm of Intel , announced that it had invested in Recon. Details of the deal were not disclosed. However, the announcement described wearables as "an area of significant focus" for Intel Capital, and it said the investment would allow Recon to "accelerate product development, marketing and global sales, as well as gain access to Intel Capital's expertise in manufacturing, operations and technology." [ 4 ]
In April 2014, Motorola Solutions announced an investment in Recon. Motorola Solutions describes itself as a provider of communications equipment for "government and enterprise customers." The terms of the deal were not made public. In July 2014, Motorola Solutions demonstrated a Recon product as a piece of kit for law enforcement personnel . [ 3 ]
On June 17, 2015, Recon was acquired by Intel . [ 7 ] The value of the deal was initially reported to be as high as Can$ 175 million. [ 14 ] However, this sum was not confirmed by Recon Instrument's Dan Eisenhardt, [ 15 ] and was later generally considered inaccurate. [ 16 ] [ 17 ] [ 11 ] After the acquisition, Recon stayed in Vancouver and planned to make use of Intel's technological resources in order to "develop smart device platforms for a broader set of customers and market segments." [ 8 ]
In June 2017, it became known that Intel intended to discontinue all remaining Recon Instruments products, i.e., Recon Jet and Recon Jet Pro. [ 10 ] Around the same time, Recon Instruments ceased all activities on both social media and its own website.
According to a Bloomberg report in October 2017, Intel had in fact completely closed its Recon Instruments division already in early summer 2017. [ 11 ]
Recon's first products were smart goggles and what the company marketed (incorrectly) as " heads-up displays " aimed at the winter sports market. More recently, the company broadened its focus with the Jet, a smart eyewear device designed for activities like cycling and running .
All of Recon Instruments' products were essentially head-worn, self-contained mobile devices equipped with GPS and environmental sensors. A near-eye display was provided in the form of a single non-translucent (solid) microdisplay situated below and to the side of one eye. This required the wearer to glance down and to the side in order to read the screen contents. Recon's head-worn displays were therefore peripheral head-mounted displays rather than head-up displays in the common meaning of the term; much less were they able to deliver an augmented reality experience due to their lack of see-through capabilities.
Recon's first commercial product, the Transcend, was released in October 2010. It was designed for winter sports and featured a small LCD screen embedded into a snow goggle frame by eyewear maker Zeal Optics, which is now a subsidiary of Maui Jim , Inc. [ 18 ] The Transcend displayed data like GPS maps, temperature, speed, and altitude, and it allowed users to share that data. [ 1 ] In 2011, the Transcend earned the Consumer Electronics Show 's Best of Innovations award for Personal Electronics. [ 19 ]
Recon's MOD and MOD Live heads-up displays were released in November 2011. Unlike the Transcend, the MOD and MOD Live were sold separately from snow goggles. Users could fit them into specially designed "Recon-Ready" goggles from eyewear makers including Uvex, Alpina, and Briko. [ 20 ] Oakley also integrated the MOD Live into a specially designed snow goggle frame and marketed the resulting product as the Airwave. [ 21 ]
Both the MOD and MOD Live offered functionality similar to the Transcend's, but the MOD Live introduced the ability to connect to smartphones via Bluetooth . When connected to a user's smartphone, the MOD Live could display caller ID and SMS notifications. [ 20 ]
Introduced in November 2013, the Snow2 is Recon's latest standalone heads-up display. It features a faster processor than the MOD and MOD Live along with improved display brightness and contrast, longer battery life, 802.11a/b/g/n Wi-Fi connectivity, and Made for iPhone (MFi) certification.
Like the MOD Live, the Snow2 can connect to smartphones in order to display call and SMS notifications. It also lets users connect to Facebook [ 22 ] and track their friends using the GPS-enabled maps feature.
The Snow2 heads-up display is designed to fit inside compatible eyewear from Oakley , Smith, Scott, Uvex, Alpina, Briko, and Zeal. [ 23 ] Oakley has integrated the Snow2 into a snow goggle frame and markets the resulting product as the Airwave 1.5. [ 24 ] Despite running an Android -based operating system, [ 25 ] the Airwave 1.5 is sold by Apple through both Apple retail stores and the online Apple Store. [ 26 ] [ 27 ]
Unlike the Snow2, the Jet combines a heads-up display with a Recon-designed sunglass frame and polarized lenses.
The Jet is aimed at activities like cycling and running rather than winter sports. Recon has also partnered with enterprise software firms SAP [ 5 ] and APX Labs [ 6 ] with the aim of making Jet suitable for industrial applications in fields like manufacturing and oil-and-gas extraction . Motorola Solutions , one of Recon's investors, has also demonstrated [ 28 ] the Jet as law-enforcement equipment.
Built into the Jet are GPS connectivity as well as sensors to track metrics like speed, pace, distance, and elevation gain. Users can also connect third-party sensors via ANT+ and smartphones via Bluetooth . Like the MOD Live and Snow2, the Jet can display call and SMS notifications from user's smartphones.
The Jet is powered by a 1 GHz processor with dual ARM Cortex-A9 cores. Its processor, display, and camera sit on the right side of the frame, while the battery sits on the left, evening out weight distribution. The battery is designed to be interchangeable, as well. [ 29 ]
Recon devices run ReconOS, an operating system based on Android . [ 25 ]
ReconOS has a custom user interface designed for small displays. It shows live activity metrics and lets users share those metrics to social media. ReconOS also features GPS maps that display the locations of nearby friends and rotate depending on the user's head orientation. When a Recon device is paired with a smartphone, ReconOS can display call and SMS notifications, and it allows users to control the phone's music playback. [ 30 ]
ReconOS runs third-party applications, as well. Developers can write ReconOS apps using the Recon SDK. [ 31 ]
The Recon Engage website allows users to browse, display, and share activity metrics recorded with a Recon device. Users can also tag friends, share photos, download software updates and third-party applications for their Recon device, and see their activities mapped in an embedded Google Maps pane. [ 32 ] [ 33 ]
Available for iOS and Android , the Recon Engage mobile app lets users view and share their activity metrics, and it also allows compatible Recon devices to connect to smartphones. Connecting a Recon device to a smartphone enables features like friend tracking, call and SMS notification display, and music playback controls. [ 34 ] [ 35 ]
The Recon Uplink desktop application lets users register their Recon device, update the device's software, and sync data from the device to an Engage account. [ 36 ] When used with Jet, the Uplink application can download photos from the device to the user's computer. [ 37 ]
Aimed at developers, the Recon SDK includes the tools, documentation, and samples necessary to write third-party applications for Recon's Jet and Snow2 devices. [ 31 ] The Recon SDK API augments the Android API with extensions specific to Recon device hardware. [ 38 ] Developers do not need to register or to pay a fee to access the Recon SDK.
By visiting the app center on Recon's Engage website, users can download third-party apps for Recon's Jet and Snow2 products. Among the apps on offer are Refuel, a "smart nutrition" app that tells users when to eat and rehydrate during activities, and MyGoproRemote2, which makes it possible to control GoPro cameras using a Jet or Snow2. [ 39 ]
The flagship product of Recon Instruments, Recon Jet, launched in 2015 to mixed reviews, with Engadget calling the device "expensive fitness glasses with potential to be better". [ 40 ] Reviewers praised Recon Instruments for bringing the first fitness-oriented head-worn displays to market. Frequently voiced criticisms were the high price point, insufficient battery life, wearer distraction and limited field of view caused by the non-see-through (solid) microdisplay, unsatisfactory GPS lag and accuracy, complex user interface, and general software problems. [ 41 ] [ 42 ] [ 43 ] | https://en.wikipedia.org/wiki/Recon_Instruments
Reconciliation ecology is the branch of ecology which studies ways to encourage biodiversity in the human-dominated ecosystems of the anthropocene era. Michael Rosenzweig first articulated the concept in his book Win-Win Ecology , [ 2 ] based on the theory that there is not enough area for all of earth's biodiversity to be saved within designated nature preserves . Therefore, humans should increase biodiversity in human-dominated landscapes. By managing for biodiversity in ways that do not decrease human utility of the system, it is a " win-win " situation for both human use and native biodiversity. The science is based in the ecological foundation of human land-use trends and species-area relationships. It has many benefits beyond protection of biodiversity, and there are numerous examples of it around the globe. Aspects of reconciliation ecology can already be found in management legislation, but there are challenges in both public acceptance and ecological success of reconciliation attempts.
Traditional conservation is based on "reservation and restoration"; reservation meaning setting pristine lands aside for the sole purpose of maintaining biodiversity, and restoration meaning returning human impacted ecosystems to their natural state. However, reconciliation ecologists argue that there is too great a proportion of land already impacted by humans for these techniques to succeed.
While it is difficult to measure exactly how much land has been transformed by human use, estimates range from 39 to 50%. This includes agricultural land, pastureland , urban areas, and heavily harvested forest systems. [ 3 ] An estimated 50% of arable land is already under cultivation. [ 4 ] Land transformation has increased rapidly over the last fifty years, and is likely to continue to increase. [ 5 ] Beyond direct transformation of land area, humans have impacted the global biogeochemical cycles , leading to human-caused change in even the most remote areas. [ 6 ] These include addition of nutrients such as nitrogen and phosphorus , acid rain , ocean acidification , redistribution of water resources, and increased carbon dioxide in the atmosphere. Humans have also changed species compositions of many landscapes that they do not dominate directly by introducing new species or harvesting native species. This new assemblage of species has been compared to previous mass extinctions and speciation events caused by formation of land bridges and the collision of continents. [ 7 ]
The need for reconciliation ecology was derived from patterns of species distribution and diversity. The most relevant of these patterns is the species-area curve which states that a larger geographic area will contain higher species diversity. This relationship has been supported by so large a body of research that some scholars consider it to be an ecological law. [ 8 ]
There are two main reasons for the relationship between number of species and area, both of which can be used as an argument for conservation of larger areas. The habitat heterogeneity hypothesis claims that a larger geographic area will have a greater variety of habitat types, and therefore more species adapted to each unique habitat type. Setting aside a small area will not encompass enough habitat variety to contain a large variety of species. [ 9 ] The equilibrium hypothesis draws from the theory of island biogeography as described by MacArthur and Wilson . [ 10 ] Large areas have large populations, which are less likely to go extinct through stochastic processes. The theory assumes that speciation rates are constant with area, and a lower extinction rate coupled with higher speciation leads to more species.
The species-area relationship has often been applied quantitatively to conservation. The simplest and most commonly used formula was first published by Frank W. Preston . [ 11 ] The number of species present increases with area according to the relationship S = cA^z, where S is the number of species, A is the area, and c and z are constants which vary with the system under study. This equation has frequently been used for designing reserve size and placement (see SLOSS debate ). [ 12 ] The most common version of the equation used in reserve design is the formula for inter-island diversity, which has a z-value between 0.25 and 0.55, [ 13 ] meaning that protecting 5% of the available habitat will preserve roughly 40% of the species present. However, inter-provincial species-area relationships have z-values closer to 1, meaning protecting 5% of habitat will only protect 5% of species diversity. [ 2 ]
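A short worked example may help make this contrast concrete. The Python sketch below evaluates the fraction of species expected to persist, S_protected/S_total = (protected area fraction)^z, for several illustrative z-values; the numbers are for illustration only.

```python
# Worked example of the species-area relationship S = c * A**z, illustrating the
# contrast between inter-island (z ~ 0.25-0.55) and inter-provincial (z ~ 1) curves.
# Values are illustrative only.

def fraction_of_species_retained(area_fraction: float, z: float) -> float:
    """Fraction of species expected to persist when only `area_fraction`
    of the habitat is protected: S_protected / S_total = area_fraction**z."""
    return area_fraction ** z

protected = 0.05  # protect 5% of the habitat
for z in (0.25, 0.30, 0.55, 1.0):
    kept = fraction_of_species_retained(protected, z)
    print(f"z = {z:.2f}: about {kept:.0%} of species retained")

# With z around 0.3 (inter-island), roughly 40% of species are retained from 5% of
# the area; with z near 1 (inter-provincial), only about 5% are retained.
```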
Taken together, proponents of reconciliation ecology see the species-area relationship and human domination of a large percentage of the earth's area as a sign that we will not be able to set aside enough land to protect all of life's biodiversity. There can be negative effects of setting land aside because it means the remaining land is used more intensely. [ 4 ] For example, less land is required for crop production when high levels of inorganic fertilizer is applied, but these chemicals will affect nearby land set aside for natural ecosystems. The direct benefits of land transformation for the growing world population often make it ethically difficult to justify the tradeoff between biodiversity and human use. [ 14 ] Reconciled ecosystems are ones in which humans dominate, but natural biodiversity is encouraged to persist within the human landscape. Ideally, this creates a more sustainable socio-ecological system and does not necessitate a trade off between biodiversity and human use.
How can understanding of species' natural history aid their effective conservation in human-dominated ecosystems? Humans often conduct activities that allow for the incorporation of other species, whether as a by-product or as a result of a focus on nature. [ 15 ] Traditional natural history can only inform how best to do this to a certain degree, because landscapes have been changed so dramatically. However, there is much more to learn through direct study of species' ecology in human-dominated ecosystems , through what is known as focused natural history. Rosenzweig [ 15 ] cites four examples: shrikes (Laniidae) thrived in altered landscapes when wooden fence post perches allowed them easy access to pouncing on prey, but inhospitable steel fence posts contributed to their decline. Replacing steel fence posts with wood fence posts reverses the shrikes' decline and allows humans to determine the reasons for the distribution and abundance of shrikes. Additionally, the cirl bunting ( Emberiza cirlus ) thrived on farms when fields alternated between harvests and hay, but declined where farmers began to plant winter grain crops, natterjack toads ( Bufo calamita ) declined when reductions in sheep grazing ceased to alter ponds to their preferred shape and depth, and longleaf pine ( Pinus palustris ) declined in the Southeastern United States when lack of wildfires prevented its return after timbering . [ 15 ] [ 16 ] Thus, applying focused natural history in human-dominated landscapes can contribute to conservation efforts.
The emerging concept of ecosystem services (popularized by the Millennium Ecosystem Assessment in 2005) changed the way ecologists perceived so-called "ordinary species": because abundant species represent the bulk of biomass and biological processes, their conservation, even if they do not appear directly threatened, constitutes a major concern for maintaining the services on which both human societies and rarer species rely. [ 17 ] Reconciliation ecology therefore proposes to take care of such species and to maintain (or restore) ecological processes in human-dominated ecosystems, thereby creating ecological corridors and preserving the proper functioning of biological cycles. [ 17 ]
Reconciliation ecologists believe increasing biodiversity within human dominated landscapes will help to save global biodiversity. This is sometimes preferable to traditional conservation because it does not impair human use of the landscape and therefore may be more acceptable to stakeholders. [ 2 ] However, not only will it encourage biodiversity in the areas where it takes place, but many scholars cite other benefits of including biodiversity in human landscapes on both global conservation activities and human well-being.
Increasing wildlife habitat in human-dominated systems not only increases in situ biodiversity, it also aids in conservation of surrounding protected areas by increasing connectivity between habitat patches. [ 18 ] [ 19 ] This may be especially important in agricultural systems where buffers, live fences, and other small habitat areas can serve as stops between major preserves. [ 20 ] This concept forms the basis of the subdiscipline countryside biogeography [ 14 ] which studies the potential of the matrix between preserves to provide habitat for species moving from preserve to preserve.
Placing importance on native ecosystems and biodiversity within human landscapes increases human exposure to natural areas, [ 21 ] which has been shown to increase appreciation of nature. Studies have shown that students who participate in outdoor education programs show a greater understanding of their environment, greater willingness to act in order to save the environment, and even a greater enthusiasm for school and learning. [ 22 ] [ 23 ] Green spaces have also been shown to connect urban dwellers of all ages with nature, even when dominated by invasive species . [ 24 ] Reconnecting people with nature is especially important for conservation because there is a tendency for people to use the biodiversity present in the landscape they grew up in as a point of comparison for future trends (see Shifting baseline ). [ 25 ]
The results of reconciliation ecology can also improve human well-being. E. O. Wilson has hypothesized that humans have an innate desire to be close to nature (see Biophilia ), [ 26 ] and numerous studies have linked natural settings to decreased stress and faster recovery during hospital stays. [ 27 ]
Many examples of native plants and animals taking advantage of human dominated landscapes have been unintentional, but may be enhanced as part of reconciliation ecology. Others are intentional redesigns of human landscapes to better accommodate native biodiversity. These have been going on for many hundreds of years including examples within agricultural systems, urban and suburban systems, marine systems, and even industrial areas.
While Rosenzweig formalized the concept, humans have been encouraging biodiversity within human landscapes for millennia. In the Trebon Biosphere Reserve of the Czech Republic , a system of human-engineered aquaculture ponds built in the 1500s not only provides a profitable harvest of fish, but also provides habitat for a hugely diverse wetland ecosystem. Many cities in Europe take pride in their local population of storks , which nest on roofs or in church towers that replace the trees they would naturally nest in. [ 2 ] There are records of humans maintaining plants in pleasure gardens as early as ancient Mesopotamia , with an especially strong tradition of incorporating gardens into the architecture of human landscapes in China . [ 28 ]
Agroforestry provides many examples of reconciliation ecology at work. In tropical agroforestry systems, crops such as coffee or fruit trees are cultivated under a canopy of shade trees, providing habitat for tropical forest species outside of protected areas. [ 29 ] For example, shade-grown coffee plantations typically have lower tree diversity than unmanaged forests, however they have much higher tree species diversity and richness than other agricultural methods. [ 30 ] Agriculture that mimics nature, encourages natural forest species along with the crops, and also takes pressure off nearby uncultivated forest areas where people are allowed to collect forest products. [ 29 ] The understory can also be managed with reconciliation ecology: allowing weeds to grow among crops (minimizing labor and preventing the invasion of noxious weed species) and leaving fallowlands alongside farmed areas can enhance understory plant richness with associated benefits for native insects and birds compared to other agricultural practices. [ 31 ]
The oil palm ( Elaeis guineensis ) provides another example of the potential of reconciliation ecology. It is one of the most important and rapidly expanding tropical crops, [ 32 ] so lucrative because it is used in many products throughout the world. Unfortunately, oil-palm agriculture is one of the main drivers of forest conversion in Southeast Asia and is devastating for native biodiversity , perhaps even more so than logging. [ 33 ] However, attempts are being made to foster the sustainability of this industry. As a monoculture , oil palm is subject to potentially devastating attacks from insect pests. [ 32 ] [ 34 ] Many companies are attempting an integrated pest management approach which encourages the planting of species that support predators and parasitoids of these insect pests, as well as an active native bird community. [ 34 ] Experiments have shown that a functioning bird community, especially at higher densities, can serve to reduce insect herbivory on oil palms, promoting increased crop yields and profits. [ 34 ] Thus, oil palm plantation managers can participate in reconciliation ecology by promoting local vegetation that is beneficial to insectivorous birds, including maintaining ground plants that serve as nesting sites, thereby protecting natural communities. Additionally, steps such as maintaining riparian buffer zones or natural forest patches can help to slow the loss of biodiversity within oil palm plantation landscapes. [ 33 ] By engaging in these environmentally friendly practices, fewer chemicals and less effort are required to maintain both plantation productivity and ecosystem services . [ 32 ] [ 34 ]
There are many grazing practices that also encourage native biodiversity. In Rosenzweig's book he uses the example of a rancher in Arizona who intentionally deepened his cattle ponds in order to save a population of threatened leopard frogs ( Rana chiricahuensis ), with no detriment to the use of those tanks for cattle, [ 2 ] and a similar situation has occurred with the vulnerable California tiger salamander ( Ambystoma californiense ) in the Central Valley of California. Research has shown that without cattle grazing, many of the remaining vernal pools would dry too early for the salamanders to complete their life cycle under global climate change predictions. [ 35 ] In Central America, a large percentage of pastureland is fenced using live trees which are not only low maintenance for the farmer, but also provide habitat for birds, bats, and invertebrates which cannot persist in open pastureland. [ 36 ] Another example from Rosenzweig involves encouraging loggerhead shrikes ( Lanius ludovicianus ) to populate pastureland by placing perches around the pasture. [ 2 ] These are all simple, low-cost ways to encourage biodiversity without negatively impacting the human uses of the landscape.
Urban ecology can be included under the umbrella of reconciliation ecology and it tackles biodiversity in cities, the most extreme of human-dominated landscapes. Cities occupy less than 3% of global surface area, but are responsible for a majority of carbon emissions, residential water use, and wood use. [ 37 ] Cities also have unique climatic conditions such as the urban heat island effect, which can greatly affect biodiversity. [ 38 ] There is a growing trend among city managers to take biodiversity into account when planning city development, especially in rapidly growing cities. Cities often have surprisingly high plant biodiversity due to their normally high degree of habitat heterogeneity and high numbers of gardens and green spaces cultivated to include a large variety of species. [ 38 ] However, these species are often not native, and a large part of the total urban biodiversity is usually made up of exotic species. [ 39 ]
Because cities are so highly impacted by human activities, restoration to the pristine state is not possible; however, there are modifications that can be made to increase habitat without negatively impacting human needs. In urban rivers, addition of large wood and floating islands to provide habitat, modifications to walls and other structures to mimic natural banks, and buffer areas to reduce pollutants can all increase biodiversity without reducing the flood control and water supply services. [ 40 ] Urban green spaces can be re-designed to encourage natural ecosystems rather than manicured lawns , as is seen in the National Wildlife Federation ’s Backyard Wildlife Habitat program. [ 41 ] Peregrine falcons ( Falco peregrinus ), which were once endangered by pesticide use, are frequently seen nesting in tall urban buildings throughout North America, feeding chiefly on the introduced rock dove . [ 42 ] The steep walls of buildings mimic the cliffs peregrines naturally nest in, and the rock doves replace the native prey species that were driven out of urban areas.
In Florida, the Florida manatee ( Trichechus manatus latirostris ) uses warm water discharged from power plants as a refuge when the temperature of the Gulf of Mexico drops. [ 43 ] These warm areas replace the warm springs that manatees once naturally used in the winter. These springs have been drained or cut off from open water by human uses. American crocodiles ( Crocodylus acutus ) have a similar habitat in the cooling canals of the Turkey Point power plant, where an estimated 10% of the total North American population of the species lives. [ 2 ]
Wastewater treatment systems have shown potential for reconciliation ecology on numerous occasions. Man-made wetlands designed to remove nitrogen before runoff from agriculture enters the Everglades in Florida are used as breeding sites for a number of birds, including the endangered wood stork ( Mycteria americana ). [ 44 ] Stormwater treatment ponds can provide important breeding habitat for amphibians, especially where natural wetlands have been drained by human development. [ 45 ]
Coral reefs have been intensively impacted by human use, including overfishing and mining of the reef itself. One reconciliation approach to this problem is building artificial reefs that not only provide valuable habitat for aquatic species, but also protect nearby islands from storms when the natural structure has been mined away. [ 46 ] Even structures as simple as scrap metal and automobiles can be used as habitat, providing added benefits of freeing space in landfills. [ 47 ]
Governmental intervention can aid in encouraging private landowners to create habitat or otherwise increase biodiversity on their land. The United States' Endangered Species Act requires landowners to halt any activities negatively affecting endangered species on their land, which is a disincentive for them to encourage endangered species to settle on their land in the first place. [ 2 ] To help mediate this problem, the US Fish and Wildlife Service has instituted safe harbor agreements whereby the landowner engages in restoration on their land to encourage endangered species, and the government agrees not to place further regulation on their activities should they want to reverse the restoration at a later date. [ 48 ] This practice has already led to an increase in aplomado falcons ( Falco femoralis ) in Texas and red-cockaded woodpeckers ( Picoides borealis ) in the Southeastern US.
Another example is the US Department of Agriculture ’s Conservation Reserve Program (CRP). The CRP was originally put in place to protect soil from erosion, but also has major implications for conservation of biodiversity. In the program, landowners take their land out of agricultural production and plant trees, shrubs, and other permanent, erosion controlling vegetation. Unintended, but ecologically significant consequences of this were the reduction of runoff, improved water quality, creation of wildlife habitat, and possible carbon sequestration . [ 49 ]
While reconciliation ecology attempts to modify the human world to encourage biodiversity without negatively impacting human use, there are many challenges in obtaining broad acceptance of the idea. For example, the addition of forest corridors to urban river systems, which improves water quality and enhances critical habitat structure for aquatic invertebrates and fish, may be seen as 'wasting' valuable real estate. [ 40 ] Similarly, many suburban areas do not allow native vegetation that provides useful wildlife habitat because it is perceived as "untidy", reflects an apathetic attitude, and may reduce property values. [ 50 ] In addition, many humans have negative feelings toward certain species, especially predators such as coyotes and wolves, which are often based more on perceived risk than actual risk of loss or injury resulting from the animal. [ 51 ] Even with cooperation of the human element of the equation, reconciliation ecology cannot help every species. Some animals, such as several species of waterfowl , show strong avoidance behaviors toward humans and any form of human disturbance. [ 52 ] No matter how well an urban park is designed, the proximity of humans will scare away some birds. Other species must maintain large territories, and barriers that abound in human habitats, such as roads, will stop them from coexisting with humans. [ 53 ] These animals will require undisturbed land set aside for them.
Reconciliation ecology therefore faces a double social challenge: changing how people perceive biodiversity, and then changing the related norms and policies so that biodiversity is better considered a positive component of human habitat. [ 17 ] | https://en.wikipedia.org/wiki/Reconciliation_ecology
Reconfigurability denotes the Reconfigurable Computing capability of a system, so that its behavior can be changed by reconfiguration, i.e., by loading different configware code. This static reconfigurability distinguishes between reconfiguration time and run time. Dynamic reconfigurability denotes the capability of a dynamically reconfigurable system to change its behavior during run time, usually in response to dynamic changes in its environment.
In the context of wireless communication, dynamic reconfigurability tackles the changeable behavior of wireless networks and associated equipment, specifically in the fields of radio spectrum , radio access technologies, protocol stacks, and application services.
Research regarding the (dynamic) reconfigurability of wireless communication systems is ongoing for example in working group 6 of the Wireless World Research Forum (WWRF), in the Wireless Innovation Forum (WINNF) (formerly Software Defined Radio Forum), and in the European FP6 project End-to-End Reconfigurability (E²R). Recently, E²R initiated a related standardization effort on the cohabitation of heterogeneous wireless radio systems in the framework of the IEEE P1900.4 Working Group.
See cognitive radio .
In the context of Control reconfiguration , a field of fault-tolerant control within control engineering , reconfigurability is a property of faulty systems meaning that the original control goals specified for the fault-free system can be reached after suitable control reconfiguration. | https://en.wikipedia.org/wiki/Reconfigurability |
A reconfigurable antenna is an antenna capable of modifying its frequency and radiation pattern dynamically, in a controlled and reversible manner. [ 2 ] In order to provide a dynamic response, reconfigurable antennas integrate an inner mechanism (such as RF switches , varactors , mechanical actuators or tunable materials ) that enable the intentional redistribution of the RF currents over the antenna surface and produce reversible modifications of its properties. Reconfigurable antennas differ from smart antennas because the reconfiguration mechanism lies inside the antenna, rather than in an external beamforming network. The reconfiguration capability of reconfigurable antennas is used to maximize the antenna performance in a changing scenario or to satisfy changing operating requirements.
Reconfigurable antennas can be classified according to the antenna parameter that is dynamically adjusted, typically the frequency of operation , radiation pattern or polarization . [ 3 ]
Frequency reconfigurable antennas can adjust their frequency of operation dynamically. They are particularly useful in situations where several communications systems converge because the multiple antennas required can be replaced by a single reconfigurable antenna. Frequency reconfiguration is generally achieved by physical or electrical modifications to the antenna dimensions using RF-switches, [ 4 ] impedance loading [ 5 ] or tunable materials. [ 6 ]
Radiation pattern reconfigurability is based on the intentional modification of the spherical distribution of the radiation pattern . Beam steering is the most extended application and consists of steering the direction of maximum radiation to maximize the antenna gain in a link with mobile devices. Pattern reconfigurable antennas are usually designed using movable/rotatable structures [ 7 ] [ 8 ] or switchable and reactively-loaded parasitic elements. [ 9 ] [ 10 ] [ 11 ] In the last 10 years, metamaterial-based reconfigurable antennas have gained attention due to their small form factor, wide beam steering range and wireless applications. [ 12 ] [ 13 ] Plasma antennas have also been investigated as alternatives with tunable directivities. [ 14 ] [ 15 ] [ 16 ]
Polarization reconfigurable antennas are capable of switching between different polarization modes. The capability of switching between horizontal, vertical and circular polarizations can be used to reduce polarization mismatch losses in portable devices. Polarization reconfigurability can be provided by changing the balance between the different modes of a multimode structure. [ 17 ]
Compound reconfiguration is the capability of simultaneously tuning several antenna parameters, for instance frequency and radiation pattern. The most common application of compound reconfiguration is the combination of frequency agility and beam-scanning to provide improved spectral efficiencies. Compound reconfigurability is achieved by combining in the same structure different single-parameter reconfiguration techniques [ 18 ] [ 19 ] or by reshaping dynamically a pixel surface. [ 1 ] [ 20 ]
There are different types of reconfiguration techniques for antennas. Mainly they are electrical [ 4 ] (for example using RF-MEMS , PIN diodes , or varactors ), optical, physical (mainly mechanical), [ 7 ] [ 8 ] and using materials. For the reconfiguration techniques using materials, the materials could be solid, liquid crystal, liquids (dielectric liquid [ 21 ] or liquid metal). | https://en.wikipedia.org/wiki/Reconfigurable_antenna |
A reconfigurable manufacturing system ( RMS ) is a system invented in 1998 that is designed at the outset for rapid change in its structure, as well as its hardware and software components, in order to quickly adjust its production capacity and functionality within a part family in response to sudden market changes or intrinsic system change. [ 1 ] [ 2 ] A reconfigurable machine can have its features and parts machined. [ 3 ]
The RMS, as well as one of its components—the reconfigurable machine tool (RMT)—were invented in 1998 in the Engineering Research Center for Reconfigurable Manufacturing Systems (ERC/RMS) at the University of Michigan College of Engineering . [ 4 ] [ 5 ] [ 6 ] The term reconfigurability in manufacturing was likely coined by Kusiak and Lee. [ 7 ]
From 1996 to 2007, Yoram Koren received an NSF grant of $32.5 million to develop the RMS science base and its software and hardware tools. [ 8 ] RMS technology is based on an approach that consists of key elements, the compilation of which is called the RMS science base.
The system is composed of stages: 10, 20, 30, etc. Each stage consists of identical machines, such as CNC milling machines . The system produces one product. The manufactured product moves on the horizontal conveyor . Then Gantry-10 grips the product and brings it to one of the CNC-10 machines. When CNC-10 finishes the processing, Gantry-10 moves the product back to the conveyor. The conveyor moves the product to Gantry-20, which grips the product and loads it on the RMT-20, and so on. Inspection machines are placed at several stages and at the end of the manufacturing system.
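To make the role of the identical parallel machines in each stage concrete, the sketch below estimates stage throughput and the bottleneck of such a line, plus the minimum number of machines a stage would need for a target output. The cycle times, machine counts and demand figure are invented for illustration and do not come from the ERC/RMS literature; real RMS design also accounts for reconfiguration cost, reliability and gantry/conveyor delays.

```python
# Minimal sketch of throughput for a line of serial stages, each
# holding identical parallel machines (as in the CNC-10 / RMT-20
# description above). All numbers are hypothetical.
import math

# (stage name, machining time per part in minutes, machines installed)
stages = [
    ("CNC-10", 4.0, 3),
    ("RMT-20", 6.0, 4),
    ("CNC-30", 3.0, 2),
]

def stage_rate(cycle_min, machines):
    """Parts per hour a stage of identical machines can deliver."""
    return machines * 60.0 / cycle_min

rates = {name: stage_rate(t, m) for name, t, m in stages}
bottleneck = min(rates, key=rates.get)
print(f"system throughput ≈ {rates[bottleneck]:.1f} parts/h "
      f"(bottleneck: {bottleneck})")

# Minimum machines per stage to meet a target of 50 parts per hour.
target = 50.0
for name, t, _ in stages:
    print(f"{name}: needs at least {math.ceil(target * t / 60.0)} machines")
```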
The product may move during its production in many production paths. In practice, there are small variations in the precision of identical machines, which create accumulated errors in the manufactured product; each path has its own "stream-of-variations" (a term coined by Y. Koren ). [ 9 ] [ 10 ]
Ideal reconfigurable manufacturing systems, according to professor Yoram Koren in 1995, possess six characteristics: modularity , integrability , customized flexibility, scalability, convertibility, and diagnosability. [ 5 ] [ 11 ] Characteristics for its components are: reconfigurable machines, controllers, and system control software. An RMS does not necessarily have all of the characteristics. [ 12 ] These six characteristics are called Koren's RMS principles. Supposedly, the more of these principles that apply to a given manufacturing system, the more reconfigurable that system is.
The components of RMS are CNC machines , [ 13 ] reconfigurable tools, [ 5 ] [ 11 ] reconfigurable inspection machines, [ 14 ] and material transport systems (such as gantries and conveyors) that connect the machines to form the system. Different arrangements and configurations of these machines will affect the system's productivity. [ 15 ] A collection of mathematical tools, which are defined as the RMS science base, may be used to maximize system productivity with the smallest possible number of machines. | https://en.wikipedia.org/wiki/Reconfigurable_manufacturing_system |
Reconstruction of attosecond beating by interference of two-photon transitions , more commonly known as RABBITT or RABBIT for short, is a widely used technique for obtaining the relative phase and amplitude of attosecond pulses. This technique involves the interference of two-photon interband transitions in solids. It is especially suited for diagnostics on the temporal structure of XUV pulses. The reconstruction of attosecond beating by interference of two-photon transitions is a valuable tool for studying ultrafast processes in materials and can provide insight into the dynamics of electrons in solids. [ 1 ] [ 2 ]
RABBITT was invented by Pierre Agostini , Harm Geert Muller and colleagues in 2001. [ 3 ] | https://en.wikipedia.org/wiki/Reconstruction_of_attosecond_beating_by_interference_of_two-photon_transitions |
A record press or stamper is a machine for manufacturing vinyl records . It is essentially a hydraulic press fitted with thin nickel stampers which are negative impressions of a master disc. [ 1 ] Labels and a pre-heated vinyl patty (or biscuit ) are placed in a heated mold cavity. Two stampers are used, one for each side of the disc. The record press closes under a pressure of about 150 tons. [ 2 ] The process of compression molding forces the hot vinyl to fill the grooves in the stampers, and take the form of the finished record.
In the mid-1960s, Emory Cook developed a system of record forming wherein the mold pressure was replaced by a vacuum . In this technique, the mold cavity was evacuated and vinyl was introduced in micro-particle form. The particles were then flash-fused instantaneously at a high temperature, forming a coherent solid. Cook called this disc manufacturing technology microfusion . A small pressing plant in Hollywood also employed a similar system which, they maintained, fused the particles more evenly throughout the disc thickness, calling their product polymax . Both claimed the resultant disc grooves exhibited less surface noise and greater resistance to deformation from stylus tip inertia than conventional pressure-molded vinyl discs. | https://en.wikipedia.org/wiki/Record_press |
In music production , the recording studio is often treated as a musical instrument when it plays a significant role in the composition of music . Sometimes called "playing the studio", the approach is typically embodied by artists or producers who favor the creative use of studio technology in record production, as opposed to simply documenting live performances in studio. [ 1 ] Techniques include the incorporation of non-musical sounds , overdubbing , tape edits, sound synthesis , audio signal processing , and combining segmented performances ( takes ) into a unified whole.
Composers have exploited the potential of multitrack recording from the time the technology was first introduced. Before the late 1940s, musical recordings were typically created with the idea of presenting a faithful rendition of a real-life performance. Following the advent of three-track tape recorders in the mid-1950s, recording spaces became more accommodating of in-studio composition. By the late 1960s, in-studio composition had become standard practice, and has remained as such into the 21st century.
Despite the widespread changes that have led to more compact recording set-ups, individual components such as digital audio workstations (DAW) are still colloquially referred to as "the studio". [ 2 ]
There is no single instance in which the studio suddenly became recognized as an instrument, and even at present [2018] it may not have wide recognition as such. Nevertheless, there is a historical precedent of the studio—broadly defined—consciously being used to perform music.
"Playing the studio" is equivalent to 'in-studio composition', meaning writing and production occur concurrently. [ 4 ] Definitions of the specific criterion of a "musical instrument" vary, [ 5 ] and it is unclear whether the "studio as instrument" concept extends to using multi-track recording simply to facilitate the basic music writing process. [ 6 ] According to academic Adam Bell, some proposed definitions may be consistent with music produced in a recording studio, but not with music that relies heavily on digital audio workstations (DAW). [ 5 ] Various music educators alluded to "using the studio as a musical instrument" in books published as early as the late 1960s. [ 7 ]
Rock historian Doyle Greene defines "studio as compositional tool" as a process in which music is produced around studio constructions rather than the more traditional method of capturing a live performance as is. [ 1 ]
Composers have exploited the potential of recording technology since it was first made available to them. [ 8 ] Before the late 1940s, musical recordings were typically created with the idea of presenting a faithful rendition of a real-life performance. [ 9 ] Writing in 1937, the American composer John Cage called for the development of "centers of experimental music", places where "the new materials, oscillators, turntables, generators, means for amplifying small sounds, film phonographs, etc." would allow composers to "work using twentieth-century means for making music." [ 10 ]
In the early 1950s, electronic equipment was expensive to own, and for most people, was only accessible through large organizations or institutions. However, virtually every young composer was interested in the potential of tape-based recording. [ 11 ] According to Brian Eno , "the move to tape was very important", because unlike gramophone records , tape was "malleable and mutable and cuttable and reversible in ways that discs aren't. It's very hard to do anything interesting with a disc". [ 9 ] In the mid 1950s, popular recording conventions changed profoundly with the advent of three-track tape recorders, [ 12 ] and by the early 1960s, it was common for producers, songwriters, and engineers to freely experiment with musical form , orchestration , unnatural reverb , and other sound effects. Some of the best known examples are Phil Spector 's Wall of Sound and Joe Meek 's use of homemade electronic sound effects for acts like the Tornados . [ 13 ]
In-studio composition became standard practice by the late 1960s and early 1970s, and remained so into the 2010s. During the 1970s, the "studio as instrument" concept shifted from the studio's recording space to the studio's control room, where electronic instruments could be plugged directly into the mixing console. [ 14 ] As of the 2010s, the "studio as instrument" idea remains ubiquitous in genres such as pop , hip-hop , and electronic music. [ 15 ]
Pioneers from the 1940s include Bill Putnam , Les Paul , and Tom Dowd , who each contributed to the development of common recording practices like reverb, tape delay , and overdubbing . Putnam was one of the first to recognize echo and reverb as elements to enhance a recording, rather than as natural byproducts of the recording space. He engineered the Harmonicats ' 1947 novelty song " Peg o' My Heart ", which was a significant chart hit and became the first popular recording to use artificial reverb for artistic effect. [ 15 ] Although Les Paul was not the first to use overdubs, he popularized the technique in the 1950s. [ 16 ]
Around the same time, French composers Pierre Schaeffer and Pierre Henry were developing musique concrète , a method of composition in which pieces of tape are rearranged and spliced together, and thus originated sampling . Meanwhile, in England, Daphne Oram experimented heavily with electronic instruments during her tenure as a balancing engineer for the BBC ; however, her tape experiments were mostly unheard at the time. [ 15 ]
The BBC Radiophonic Workshop was also among the first to use the recording studio as a creative tool. Its contribution is often overlooked because it produced music for television and because its most prominent figures, the early innovators Daphne Oram and Delia Derbyshire, were women. [ citation needed ]
English producer Joe Meek around the same time exploited the use of recording studios as instruments, and was one of the first producers to assert an individual identity as an artist . [ 17 ] He began production work in 1955 at IBC Studios in London. One of Meek's signature techniques was to overload a signal with dynamic range compression , which was unorthodox at the time. He was antagonized by his employers for his "radical" techniques. Some of these methods, such as close-miking instruments, later became part of normal recording practice. [ 15 ] Music journalist Mark Beaumont writes that Meek "realised the studio-as-instrument philosophy years before The Beatles or The Beach Boys ". [ 18 ]
Discussing Jerry Leiber and Mike Stoller , Adam Bell describes the songwriting duo's productions for the Coasters as "an excellent example of their pioneering practices in the emerging field of production", citing an account from Stoller in which he recalls "cutting esses off words, sticking the tape back together so you didn't notice. And sometimes if the first refrain on a take was good and the second one lousy, we'd tape another recording of the first one and stick it in place of the second one." [ 19 ]
Phil Spector , sometimes regarded as Joe Meek's American counterpart, [ 20 ] is also considered "important as the first star producer of popular music and its first 'auteur' ... Spector changed pop music from a performing art ... to an art which could sometimes exist only in the recording studio". [ 21 ] His original production formula (dubbed the " Wall of Sound ") called for large ensembles (including some instruments not generally used for ensemble playing, such as electric and acoustic guitars ), with multiple instruments doubling and even tripling many of the parts to create a fuller, richer sound. [ 22 ] [ nb 1 ] It evolved from his mid-1950s work with Leiber and Stoller during the period in which they sought a fuller sound through excessive instrumentation. [ 24 ] [ nb 2 ] Spector's 1963 production of " Be My Baby ", according to Rolling Stone magazine, was a " Rosetta stone for studio pioneers such as the Beatles and Brian Wilson ". [ 25 ]
The Beatles' producer George Martin and the Beach Boys ' producer-songwriter Brian Wilson are generally credited with helping to popularize the idea of the studio as an instrument used for in-studio composition, and music producers after the mid 1960s increasingly drew from their work. [ 26 ] [ nb 3 ] Although Martin was nominally the Beatles' producer, from 1964 he ceded control to the band, allowing them to use the studio as a workshop for their ideas and later as a sound laboratory. [ 28 ] Musicologist Olivier Julien writes that the Beatles' "gradual integration of arranging and recording into one and the same process" began as early as 1963, but developed in earnest during the sessions for Rubber Soul (1965) and Revolver (1966) and "ultimately blossomed" during the sessions for Sgt. Pepper's Lonely Hearts Club Band (1967). [ 29 ] Wilson, who was mentored by Spector, [ 30 ] was another early auteur of popular music. [ 26 ] Authors Jim Cogan and William Clark credit him as the first rock producer to use the studio as a discrete instrument. [ 30 ]
According to author David Howard, Martin's work on the Beatles' " Tomorrow Never Knows ", from Revolver , and Spector's production of " River Deep – Mountain High " from the same year were the two recordings that ensured that the studio "was now its own instrument". [ 32 ] Citing composer and producer Virgil Moorefield 's book The Producer as Composer , author Jay Hodgson highlights Revolver as representing a "dramatic turning point" in recording history through its dedication to studio exploration over the "performability" of the songs, as this and subsequent Beatles albums reshaped listeners' preconceptions of a pop recording. [ 33 ] According to Julien, the follow-up LP Sgt. Pepper represents the "epitome of the transformation of the recording studio into a compositional tool", marking the moment when "popular music entered the era of phonographic composition." [ 34 ] Composer and musicologist Michael Hannan attributes the album's impact to Martin and his engineers, in response to the Beatles' demands, making increasingly creative use of studio equipment and originating new processes. [ 35 ]
Like Revolver , " Good Vibrations ", which Wilson produced for the Beach Boys in 1966, played a leading role in revolutionizing rock from live concert performances into studio productions that could only exist on record. [ 36 ] For the first time, Wilson limited himself to recording short interchangeable fragments (or "modules") rather than a complete song. Through the method of tape splicing , each fragment could then be assembled into a linear sequence – as Wilson explored on subsequent recordings from this period – allowing any number of larger structures and divergent moods to be produced at a later time. [ 37 ] [ nb 4 ] Musicologist Charlie Gillett called "Good Vibrations" "one of the first records to flaunt studio production as a quality in its own right, rather than as a means of presenting a performance", [ 38 ] while rock critic Gene Sculatti called it the "ultimate in-studio production trip", adding that its influence was apparent in songs such as " A Day in the Life " from Sgt. Pepper . [ 39 ]
Adam Bell credits Brian Eno with popularizing the concept of the studio as instrument, particularly that it "did not require previous experience, and in some ways, a lack of know-how might even be advantageous to creativity", and that "such an approach was typified" by Kraftwerk , whose members proclaimed "we play the studio". [ 14 ] He goes on to say:
While those of the ilk of Brian Wilson used the studio as an instrument by orchestrating everyone that worked within it, the turn to technology in the cases of Sly Stone , Stevie Wonder , Prince , and Brian Eno signify a conceptual shift in which an alternative approach that might make using the studio as an instrument cheaper, easier, more convenient, or more creative, was increasingly sought after. Compared to the 1960s, using the studio as an instrument became less about working the system as it were, and more about working the systems . [ 14 ]
Producer Conny Plank was cited as creating "a world in sound using the studio as an instrument" while producing bands such as Can , Cluster , Neu! , Kraftwerk and Ultravox , amongst many others; the studio was seen as an integral part of the music. [ 40 ] [ 41 ]
Jamaican producer Lee "Scratch" Perry was noted for his 70s reggae and dub productions, recorded at his Black Ark studio. [ 42 ] David Toop commented that "at its heights, Perry's genius has transformed the recording studio" into "virtual space, an imaginary chamber over which presided the electronic wizard, evangelist, gossip columnist and Dr. Frankenstein that he became." [ 43 ]
From the late 1970s onward, hip hop production has been strongly linked to the lineage and technique of earlier artists who used the studio as an instrument. Jazz critic Francis Davis identified early hip-hop DJs , including Afrika Bambaataa and Grandmaster Flash , as "grassroots successors to Phil Spector, Brian Wilson, and George Martin, the 1960s producers who pioneered the use of the recording studio as an instrument in its own right." [ 44 ]
Beginning in the 1980s, musicians associated with the genres dream pop and shoegazing made innovative use of effects pedals and recording techniques to create ethereal, "dreamy" musical atmospheres. [ 45 ] The English-Irish shoegazing band My Bloody Valentine , helmed by guitarist-producer Kevin Shields , are often celebrated for their studio albums Isn't Anything (1988) and Loveless (1991). Writing for The Sunday Times , Paul Lester said Shields is "widely accepted as shoegazing's genius", with "his astonishing wall of sound, use of the studio as instrument and dazzling reinvention of the guitar making him a sort of hydra-headed Spector- Hendrix -Eno figure". [ 46 ]
Chuck Eddy writes that, as the CD era emerged in the late 1980s, pop-metal was the first musical style to exploit contemporary recording studio techniques for "an aesthetic advantage", citing Def Leppard 's Hysteria (1987) as a pioneering example and Lita Ford 's Stiletto (1990) as a similar case, as both albums feature incidental high tech "whooshes and wobbles and giggles and boinks". [ 47 ] Similarly, Eddy cites Kix 's Blow My Fuse (1988) as an album whose sonics embody a futuristic "digital disco " sound, with "dub-doctored Who synths" and a 'studiofied' production. [ 48 ]
American psychedelic rock band The Flaming Lips earned comparisons by critics to Brian Wilson's work when discussing their albums Zaireeka (1997) and The Soft Bulletin (1999), which were the results of extensive studio experimentation. When asked what instrument he plays, frontman Wayne Coyne simply stated "the recording studio". [ 49 ] | https://en.wikipedia.org/wiki/Recording_studio_as_an_instrument |
In metallurgy , recovery is a process by which a metal or alloy 's deformed grains can reduce their stored energy by the removal or rearrangement of defects in their crystal structure . These defects, primarily dislocations , are introduced by plastic deformation of the material and act to increase the yield strength of a material. Since recovery reduces the dislocation density, the process is normally accompanied by a reduction in a material's strength and a simultaneous increase in the ductility . As a result, recovery may be considered beneficial or detrimental depending on the circumstances.
Recovery is related to the similar processes of recrystallization and grain growth , each of them being stages of annealing . Recovery competes with recrystallization, as both are driven by the stored energy, but is also thought to be a necessary prerequisite for the nucleation of recrystallized grains. It is so called because there is a recovery of the electrical conductivity due to a reduction in dislocations. This creates defect-free channels, giving electrons an increased mean free path . [ 1 ]
The physical processes that fall under the designations of recovery, recrystallization and grain growth are often difficult to distinguish in a precise manner. Doherty et al. (1998) stated:
"The authors have agreed that ... recovery can be defined as all annealing processes occurring in deformed materials that occur without the migration of a high-angle grain boundary"
Thus the process can be differentiated from recrystallization and grain growth as both feature extensive movement of high-angle grain boundaries.
If recovery occurs during deformation (a situation that is common in high-temperature processing) then it is referred to as 'dynamic' while recovery that occurs after processing is termed 'static'. The principal difference is that during dynamic recovery, stored energy continues to be introduced even as it is decreased by the recovery process - resulting in a form of dynamic equilibrium .
A heavily deformed metal contains a huge number of dislocations predominantly caught up in 'tangles' or 'forests'. Dislocation motion is relatively difficult in a metal with a low stacking fault energy and so the dislocation distribution after deformation is largely random. In contrast, metals with moderate to high stacking fault energy, e.g. aluminum, tend to form a cellular structure where the cell walls consist of rough tangles of dislocations. The interiors of the cells have a correspondingly reduced dislocation density.
Each dislocation is associated with a strain field which contributes some small but finite amount to the material's stored energy. When the temperature is increased - typically below one-third of the absolute melting point - dislocations become mobile and are able to glide , cross-slip and climb . If two dislocations of opposite sign meet then they effectively cancel out and their contribution to the stored energy is removed. When annihilation is complete, only the excess dislocations of one sign will remain.
After annihilation any remaining dislocations can align themselves into ordered arrays where their individual contribution to the stored energy is reduced by the overlapping of their strain fields. The simplest case is that of an array of edge dislocations of identical Burgers vector . This idealized case can be produced by bending a single crystal that will deform on a single slip system (the original experiment performed by Cahn in 1949). The edge dislocations will rearrange themselves into tilt boundaries , a simple example of a low-angle grain boundary . Grain boundary theory predicts that an increase in boundary misorientation will increase the energy of the boundary but decrease the energy per dislocation. Thus, there is a driving force to produce fewer, more highly misoriented boundaries. The situation in highly deformed, polycrystalline materials is naturally more complex. Many dislocations of different Burgers vectors can interact to form complex 2-D networks.
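The claim that raising the misorientation increases the boundary energy while lowering the energy per dislocation is usually quantified with the Read–Shockley expression for low-angle tilt boundaries; the form below is the standard textbook version and is offered here only as a supporting sketch, not as notation taken from this article.

```latex
% Read-Shockley energy of a low-angle tilt boundary built from edge
% dislocations of Burgers vector b spaced D apart, with misorientation
% \theta \approx b / D:
\gamma(\theta) \;=\; \gamma_0\,\theta\,\bigl(A - \ln\theta\bigr),
\qquad
\gamma_0 \;=\; \frac{G b}{4\pi(1-\nu)}
% G is the shear modulus, \nu the Poisson ratio and A a constant fixed
% by the dislocation core energy. \gamma grows with \theta, but the
% energy carried per dislocation, \gamma D = \gamma b/\theta =
% \gamma_0 b (A - \ln\theta), falls as \theta increases -- hence the
% driving force toward fewer, more highly misoriented boundaries.
```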
As mentioned above, the deformed structure is often a 3-D cellular structure with walls consisting of dislocation tangles. As recovery proceeds these cell walls will undergo a transition towards a genuine subgrain structure. This occurs through a gradual elimination of extraneous dislocations and the rearrangement of the remaining dislocations into low-angle grain boundaries.
Sub-grain formation is followed by subgrain coarsening where the average size increases while the number of subgrains decreases. This reduces the total area of grain boundary and hence the stored energy in the material. Subgrain coarsening shares many features with grain growth.
If the sub-structure can be approximated to an array of spherical subgrains of radius R and boundary energy γ s ; the stored energy is uniform; and the force on the boundary is evenly distributed, the driving pressure P is given by:
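The expression itself appears to have been lost from this excerpt. A commonly quoted estimate, obtained by equating the driving pressure to the stored boundary energy per unit volume of the idealized spherical-subgrain array, is sketched below; it should be read as an approximation consistent with the stated assumptions rather than as the article's exact formula.

```latex
% Spherical subgrains of radius R, boundary energy \gamma_s, each
% boundary shared by two subgrains. Boundary area per unit volume:
%   (4\pi R^2 / 2) / (\tfrac{4}{3}\pi R^3) = 3/(2R),
% so the stored energy per unit volume, taken as the driving
% pressure for coarsening, is approximately
P \;\approx\; \frac{3\,\gamma_s}{2R}
% Coarsening (growth of R) therefore lowers the stored energy, and the
% driving pressure decays as the subgrains grow.
```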
Since γ s is dependent on the boundary misorientation of the surrounding subgrains, the driving pressure generally does not remain constant throughout coarsening. | https://en.wikipedia.org/wiki/Recovery_(metallurgy) |
A recovery boiler is the part of the kraft process of pulping where chemicals for white liquor are recovered and reformed from black liquor , which contains lignin from previously processed wood. The black liquor is burned, generating heat, which is usually used in the process of making electricity, much as in a conventional steam power plant . The invention of the recovery boiler by G.H. Tomlinson in the early 1930s was a milestone in the advancement of the kraft process. [ 1 ]
Recovery boilers are also used in the (less common) sulfite process of wood pulping; this article deals only with recovery boiler use in the kraft process.
Concentrated black liquor contains organic dissolved wood residue in addition to sodium sulfate from the cooking chemicals added at the digester. Combustion of the organic portion of chemicals produces heat. In the recovery boiler, heat is used to produce high pressure steam, which is used to generate electricity in a turbine. The turbine exhaust, low pressure steam is used for process heating.
Combustion of black liquor in the recovery boiler furnace needs to be controlled carefully. High concentration of sulfur requires optimum process conditions to avoid production of sulfur dioxide and reduced sulfur gas emissions. In addition to environmentally clean combustion, reduction of inorganic sulfur must be achieved in the char bed .
Several processes occur in the recovery boiler: the sprayed black liquor is dried, its organic fraction is burned to release heat, and the inorganic sulfur compounds are reduced in the char bed to form smelt.
Some features of the original recovery boiler have remained unchanged to this day. It was the first recovery equipment type where all processes occurred in a single vessel. The drying, combustion and subsequent reactions of black liquor all occur inside a cooled furnace. This is the main idea in Tomlinson's work.
Secondly, the combustion is aided by spraying the black liquor into small droplets. Controlling the process by directing the spray proved easy. Spraying was used in early rotary furnaces and with some success adapted to the stationary furnace by H. K. Moore. Thirdly, one can control the char bed by having a primary air level at the char bed surface and more levels above it. The multiple-level air system was introduced by C. L. Wagner.
Recovery boilers also improved smelt removal: the smelt is removed directly from the furnace through smelt spouts into a dissolving tank. Some of the first recovery units employed Cottrell's electrostatic precipitator for dust recovery.
Babcock & Wilcox was founded in 1867 and gained early fame with its water tube boilers . The company built and put into service the first black liquor recovery boiler in the world in 1929. [ 2 ] This was soon followed by a unit with completely water cooled furnace at Windsor Mills in 1934. After reverberatory and rotating furnaces the recovery boiler was on its way.
The second early pioneer, Combustion Engineering (now GE) based its recovery boiler design on the work of William M. Cary, who in 1926 designed three furnaces to operate with direct liquor spraying and on work by Adolph W. Waern and his recovery units.
Recovery boilers were soon licensed and produced in Scandinavia and Japan. These boilers were built by local manufacturers from drawings and with instructions of licensors. One of the early Scandinavian Tomlinson units employed an 8.0 m high furnace that had 2.8×4.1 m furnace bottom which expanded to 4.0×4.1 m at superheater entrance. [ 3 ]
This unit stopped production every weekend. In the beginning the economizers had to be water washed twice every day, but after the installation of shot sootblowing in the late 1940s the economizers could be cleaned at the regular weekend stop.
The construction utilized was very successful. One of the early Scandinavian boilers, a 160 t/day unit at Korsnäs, still operated almost 50 years later. [ 4 ]
The use of kraft recovery boilers spread fast as functioning chemical recovery gave kraft pulping an economic edge over sulfite pulping. [ 5 ]
The first recovery boilers had horizontal evaporator surfaces, followed by superheaters and more evaporation surfaces. These boilers resembled the state-of-the-art boilers of some 30 years earlier. This trend has continued until today. Since a halt in the production line is very costly, the technology adopted in recovery boilers tends to be conservative.
The first recovery boilers had severe problems with fouling . [ 6 ]
Tube spacing wide enough for normal operation of a coal-fired boiler had to be made wider still for recovery boilers. This gave satisfactory performance of about a week before a water wash. Mechanical sootblowers were also quickly adopted. To control chemical losses and lower the cost of purchased chemicals, electrostatic precipitators were added. The practice of lowering dust losses in flue gases goes back more than 60 years.
One should also note square headers in the 1940 recovery boiler. The air levels in recovery boilers soon standardized to two: a primary air level at the char bed level and a secondary above the liquor guns.
For the first few decades, the furnace lining was made of refractory brick. The flow of smelt on the walls required extensive brick replacement, and designs that eliminated the use of bricks were soon developed.
To achieve solid operation and low emissions the recovery boiler air system needs to be properly designed. Air system development continues and has been continuing as long as recovery boilers have existed. [ 7 ] As soon as the target set for the air system has been met new targets are given. Currently the new air systems have achieved low NOx, but are still working on lowering fouling. Table 1 visualizes the development of air systems.
Table 1: Development of air systems. [ 7 ]
The first generation air system in the 1940s and 1950s consisted of a two level arrangement; primary air for maintaining the reduction zone and secondary air below the liquor guns for final oxidation. [ 8 ] The recovery boiler size was 100 – 300 TDS (tons of dry solids) per day and the black liquor concentration 45 – 55%. Frequently, auxiliary fuel needed to be fired to sustain combustion. Primary air was 60 – 70% of total air with secondary the rest. At all levels the openings were small and design velocities were 40 – 45 m/s. Both air levels were operated at 150 °C. The liquor gun or guns were oscillating. Main problems were high carryover , plugging and low reduction. But the function, combustion of black liquor, could be fulfilled.
The second generation air system targeted high reduction. In 1954 CE moved their secondary air from about 1 m below the liquor guns to about 2 m above them. [ 8 ] The air ratios and temperatures remained the same, but to increase mixing, 50 m/s secondary air velocities were used. CE changed their frontwall/backwall secondary to tangential firing at that time. In a tangential air system the air nozzles are in the furnace corners. The preferred method is to create a swirl of almost the total furnace width. In large units the swirl caused left and right imbalances. This kind of air system with increased dry solids managed to increase lower furnace temperatures and achieve reasonable reduction. B&W had already adopted the three-level air feeding by then.
The third generation air system was the three-level air system. In Europe the use of three levels of air feeding with primary and secondary below the liquor guns started about 1980. At the same time stationary firing gained ground. Use of about 50% secondary seemed to give a hot and stable lower furnace. [ 9 ] Higher black liquor solids of 65 – 70% came into use. A hotter lower furnace and improved reduction were reported. With three-level air and higher dry solids the sulfur emissions could be kept under control.
Fourth generation air systems are the multilevel air and the vertical air systems. As the dry solids content of the black liquor fed to the recovery boiler has increased, achieving low sulfur emissions is no longer the main target of the air system. Instead low NOx and low carryover are the new targets.
The three-level air system was a significant improvement, but better results were required. Use of CFD models offered a new insight of air system workings. The first to develop a new air system was Kvaerner (Tampella) with their 1990 multilevel secondary air in Kemi, Finland, which was later adapted to a string of large recovery boilers. [ 10 ] Kvaerner also patented the four level air system, where additional air level is added above the tertiary air level. This enables significant NOx reduction.
Vertical air mixing was invented by Erik Uppstu. [ 11 ] His idea is to turn traditional vertical mixing into horizontal mixing. Closely spaced jets will form a flat plane. In traditional boilers this plane has been formed by secondary air. Placing the planes in a 2/3 or 3/4 arrangement improves mixing. Vertical air has the potential to reduce NOx, as staging the air helps in decreasing emissions. [ 12 ] In vertical air mixing, the primary air supply is arranged conventionally. The rest of the air ports are placed in an interlacing 2/3 or 3/4 arrangement.
As-fired black liquor is a mixture of organics, inorganics and water. Typically the amount of water is expressed as the mass ratio of dried black liquor to the black liquor before drying. This ratio is called the black liquor dry solids.
If the black liquor dry solids content is below 20%, i.e. the water content in the black liquor is above 80%, the net heating value of the black liquor is negative. This means that all heat from combustion of the organics in the black liquor is spent evaporating the water it contains. The higher the dry solids, the less water the black liquor contains and the hotter the adiabatic combustion temperature.
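A back-of-the-envelope sketch of this break-even behaviour is given below. The heating value assumed for the dry solids and the heat charged to evaporating the water are round, textbook-order figures chosen only to reproduce the qualitative trend (net heat turning negative somewhere below roughly 20% dry solids); they are not design values for any particular mill, and the model ignores flue gas losses, smelt heat and the water formed in combustion.

```python
# Crude energy balance per kg of as-fired black liquor, to show why
# very wet liquor cannot sustain combustion. Values are illustrative.

HHV_SOLIDS = 14.0e6   # J per kg dry solids (assumed, order ~13-15 MJ/kg)
H_WATER    = 2.6e6    # J per kg water (latent + sensible heat, assumed)

def net_heat_per_kg_liquor(dry_solids_fraction):
    """Heat released by the solids minus heat spent evaporating water."""
    x = dry_solids_fraction
    return x * HHV_SOLIDS - (1.0 - x) * H_WATER

for x in (0.15, 0.20, 0.45, 0.65, 0.80, 0.85):
    q = net_heat_per_kg_liquor(x)
    print(f"dry solids {x:4.0%}: net heat ≈ {q / 1e6:+5.1f} MJ/kg liquor")
```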
Black liquor dry solids have always been limited by the capability of the available evaporation plant. [ 13 ] The virgin black liquor dry solids of recovery boilers can be plotted as a function of the purchase year of the boiler.
On average, virgin black liquor dry solids have increased over time. This is especially true for the latest very large recovery boilers. Design dry solids for greenfield mills have been either 80 or 85% dry solids. 80% (or before that 75%) dry solids has been in use in Asia and South America, and 85% (or before that 80%) in Scandinavia and Europe.
Development of recovery boiler main steam pressure and temperature was rapid at the beginning. By 1955, not even 20 years after the birth of the recovery boiler, the highest steam pressures and temperatures were 10.0 MPa and 480 °C. The pressures and temperatures in use then backed down somewhat for safety reasons. [ 14 ] By 1980 there were about 700 recovery boilers in the world. [ 9 ]
(Figure: development of recovery boiler pressure, temperature and capacity.)
One of the main hazards in the operation of recovery boilers is the smelt-water explosion. This can happen if even a small amount of water is mixed with the solids at high temperature. The smelt-water explosion is a purely physical phenomenon and has been studied by Grace. [ 15 ] The liquid - liquid type explosion mechanism has been established as one of the main causes of recovery boiler explosions.
In a smelt-water explosion even a few liters of water, when mixed with molten smelt, can violently turn to steam in a few tenths of a second. The char bed and water can coexist as steam blanketing reduces heat transfer. Some trigger event destroys the balance and the water evaporates quickly through direct contact with smelt. This sudden evaporation causes an increase in volume and a pressure wave of some 10 000 – 100 000 Pa. The force is usually sufficient to cause all furnace walls to bend out of shape. Safety of equipment and personnel requires an immediate shutdown of the recovery boiler if there is a possibility that water has entered the furnace. All recovery boilers have to be equipped with a special automatic shutdown sequence.
The other type of explosion is the combustible gas explosion. For this to happen the fuel and the air have to be mixed before ignition. Typical conditions are either a blackout (loss of flame) without a purge of the furnace or continuous operation in a substoichiometric state. To detect a blackout, flame monitoring devices are installed, with a subsequent interlocked purge and startup. Combustible gas explosions are connected with oil/gas firing in the boiler. As continuous O 2 monitoring is also practiced in virtually every boiler, combustible gas explosions have become very rare.
The modern recovery boiler is of a single drum design, with a vertical steam generating bank and wide-spaced superheaters. This design was first proposed by Colin MacCallum in 1973 in a proposal by Götaverken (now Metso Power inc.) for a large recovery boiler in Skutskär, Sweden, with a capacity of 4,000,000 lb of black liquor solids per day, but the design was rejected as being too advanced at that time by the prospective owner. MacCallum presented the design at BLRBAC and in a paper "The Radiant Recovery Boiler" printed in Tappi magazine in December 1980. The first boiler of this single-drum design was sold by Götaverken at Leaf River in Mississippi in 1984. The construction of the vertical steam generating bank is similar to the vertical economizer. A vertical boiler bank is easy to keep clean. The spacing between superheater panels increased and leveled off at over 300 but under 400 mm. Wide spacing in superheaters helps to minimize fouling. This arrangement, in combination with sweetwater attemperators, ensures maximum protection against corrosion. There have been numerous improvements in recovery boiler materials to limit corrosion. [ 16 ] [ 17 ] [ 18 ] [ 19 ]
Increasing the dry solids concentration has had a significant effect on the main operating variables. The steam flow increases with increasing black liquor dry solids content. Increasing closure of the pulp mill means that less heat per unit of black liquor dry solids will be available in the furnace. The flue gas heat loss will decrease as the flue gas flow diminishes. Increasing black liquor dry solids is especially helpful since the recovery boiler capacity is often limited by the flue gas flow.
A modern recovery boiler consists of heat transfer surfaces made of steel tube: the furnace, superheaters, boiler generating bank and economizers. The steam drum design is of single-drum type. The air and black liquor are introduced through primary and secondary air ports, liquor guns and tertiary air ports. The combustion residue, smelt, exits through smelt spouts to the dissolving tank.
The nominal furnace loading has increased during the last ten years and will continue to increase. [ 20 ] Changes in air design have increased furnace temperatures. [ 21 ] [ 22 ] [ 23 ] [ 24 ] This has enabled a significant increase in hearth solids loading (HSL) with only a modest design increase in hearth heat release rate (HHRR). The average flue gas flow decreases as less water vapor is present, so the vertical flue gas velocities can be reduced even with increasing temperatures in the lower furnace.
The most marked change has been the adoption of the single drum construction. This change has been partly driven by more reliable water quality control. The advantages of a single drum boiler compared to a bi-drum boiler are improved safety and availability. Single drum boilers can be built to higher pressures and bigger capacities. Savings can be achieved with decreased erection time. There are fewer tube joints in the single drum construction, so drums with improved startup curves can be built.
The construction of the vertical steam generating bank is similar to the vertical economizer, which based on experience is very easy to keep clean. [ 25 ] Vertical flue gas flow path improves the cleanability with high dust loading. [ 26 ] To minimize the risk for plugging and maximize the efficiency of cleaning both the generating bank and the economizers are arranged on generous side spacing. Plugging of a two drum boiler bank is often caused by the tight spacing between the tubes.
The spacing between superheater panels has increased. All superheaters are now wide spaced to minimize fouling. This arrangement, in combination with sweetwater attemperators, ensures maximum protection against corrosion. With wide spacing plugging of the superheaters becomes less likely, the deposit cleaning is easier and the sootblowing steam consumption is lower. Increased number of superheaters facilitates the control of superheater outlet steam temperature especially during start ups.
The lower loops of hottest superheaters can be made of austenitic material, with better corrosion resistance. The steam velocity in the hottest superheater tubes is high, decreasing the tube surface temperature. Low tube surface temperatures are essential to prevent superheater corrosion. A high steam side pressure loss over the hot superheaters ensures uniform steam flow in tube elements.
Recovery boilers have been the preferred mode of kraft mill chemical recovery since the 1930s and the process has been improved considerably since the first generation. There have been attempts to replace the Tomlinson recovery boiler with recovery systems yielding higher efficiency. The most promising candidate appears to be gasification, [ 27 ] [ 28 ] where Chemrec's technology for entrained flow gasification of black liquor could prove to be a strong contender. [ 29 ]
Even if new technology is able to compete with traditional recovery boiler technology the transition will most likely be gradual. First, manufacturers of recovery boilers such as Metso , Andritz and Mitsubishi , can be expected to continue development of their products. Second, Tomlinson recovery boilers have a long life span, often around 40 years, and will probably not be replaced until the end of their economic lifetime, and may in the meantime be upgraded at intervals of 10 – 15 years.
| https://en.wikipedia.org/wiki/Recovery_boiler |
The recovery effect is a phenomenon observed in battery usage where the available energy is less than the difference between energy charged and energy consumed. Intuitively, this is because the energy has been consumed from the edge of the battery and the charge has not yet diffused evenly around the battery. [ 1 ]
When power is extracted continuously voltage decreases in a smooth curve, but the recovery effect can result in the voltage partially increasing if the current is interrupted. [ 2 ]
The KiBaM battery model [ 3 ] describes the recovery effect for lead-acid batteries and is also a good approximation to the observed effects in Li-ion batteries . [ 1 ] [ 4 ] In some batteries, the gains from the recovery effect can extend battery life by up to 45% by alternating discharging and inactive periods rather than constantly discharging. [ 5 ] The size of the recovery effect depends on the battery load, recovery time and depth of discharge. [ 6 ]
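A minimal simulation of the KiBaM idea is sketched below: the capacity is split into an "available" and a "bound" charge well, and charge trickles from the bound well back to the available one during rest, which is the recovery effect. The capacity, rate constant, well split and load schedule are arbitrary illustrative numbers, the integration is a plain Euler step, and the model deliberately ignores the overpotential losses discussed in the next paragraph.

```python
# Sketch of the Kinetic Battery Model (KiBaM): an available-charge well
# (fraction c of capacity) feeds the load, while a bound-charge well
# refills it at a rate proportional to the head difference. Rest
# periods let the available well recover, so a pulsed load can draw
# more total charge than a continuous load of the same magnitude.

def delivered_charge(load, capacity=1000.0, c=0.5, k=0.005, dt=1.0):
    """Euler-integrate KiBaM until the available well empties; return
    the total charge delivered. load(t) gives the current draw."""
    y1, y2 = c * capacity, (1.0 - c) * capacity   # available, bound
    t = total = 0.0
    while y1 > 0.0:
        i = load(t)
        flow = k * (y2 / (1.0 - c) - y1 / c)      # bound -> available
        y1 += (-i + flow) * dt
        y2 += (-flow) * dt
        total += i * dt
        t += dt
    return total

continuous = delivered_charge(lambda t: 4.0)
# Same 4.0 draw, but applied in 60 s bursts separated by 60 s rests.
pulsed = delivered_charge(lambda t: 4.0 if (t // 60) % 2 == 0 else 0.0)

print(f"continuous load: {continuous:6.1f} charge units delivered")
print(f"pulsed load:     {pulsed:6.1f} charge units delivered")
```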
Even though the recovery effect phenomenon is prominent in the lead acid battery chemistry, its existence in alkaline , Ni-MH and Li-Ion batteries is still questionable. For instance, a systematic experimental case study [ 7 ] shows that an intermittent discharge current in the case of alkaline, Ni-MH and Li-ion batteries results in a decreased usable energy output compared to a continuous discharge current of the same average value. This is primarily due to the increased overpotential caused by the high peak currents of the intermittent discharge relative to a continuous discharge current of the same average value. | https://en.wikipedia.org/wiki/Recovery_effect |
Recreation ecology is the scientific study of environmental impacts resulting from recreational activity in protected natural areas. This field of study includes research and monitoring assessments of biophysical changes, analyses to identify causal and influential factors or support carrying capacity planning and management, and investigations of the efficacy of educational, regulatory, and site management actions designed to minimize recreation impacts. This ecological understanding of the environmental impacts of outdoor recreation is critical to the management of recreation, ecotourism and visitation to natural spaces. [ 1 ] Recreation ecology research has looked at the ecological impacts of hiking, camping and other outdoor recreation activities where the use and visitation is concentrated. [ 2 ] As outdoor recreation shows increasing participation globally, questions and concerns are raised as to whether it can be managed sustainably with minimal impact on the environment. [ 2 ]
While scientific studies of human trampling can be traced back to the late 1920s, a substantial body of recreation ecology literature did not accumulate until the 1970s when visitation to the outdoors soared, threatening the ecology of natural and semi-natural areas. Since the 1970s and 1980s, this discipline has slowly accumulated momentum, adding new researchers each year. Most of this field's work comes from Europe, although North American studies are quickly growing. Some prominent United States undergraduate and graduate programs include Oregon State University, Utah State University, the University of Illinois-Urbana Champaign, and Fort Lewis College, situated in Colorado. Other universities have begun developing programs, too, with hopes of sustainably transforming the way people interact with their natural and recreational resources. The Global South has received far less attention, although notably, Rwandan communities invested in sustainable modes of recreation and tourism relating to mountain gorilla habitat quality have considered the suite of environmental effects.
Recreation ecology as a field of study more officially began in the early 1960s [ 2 ] and was addressed in depth by J. Alan Wagar in his work titled The Carrying Capacity of Wild Lands For Recreation [ 3 ] , published in 1964 by the Society of American Foresters . In this publication, Wagar poses the question: do wild lands have carrying capacities for recreation use? Wagar addresses this question in terms of: (1) the impacts of outdoor recreation on people, (2) the impacts of people on these outdoor spaces, and (3) management procedures to address issues of overcrowding in wild lands for recreation. [ 3 ]
In the past few decades, more than 1000 articles on recreation ecology have been published. [ 2 ] As it is projected that the amount of time spent and the numbers of participants in winter, water-based and developed land activities will grow faster than the population, [ 4 ] there is a growing importance and need for recreation ecology.
Resource elements examined include soil , vegetation , water , and more recently, wildlife and microbes , with the majority of investigations conducted on trails, recreation sites, and campsites. Use-impact relationships, environmental resistance and resilience, management effectiveness, monitoring techniques, and carrying capacity are some of the major themes in recreation ecology. The ecological impacts most often studied in the field include trampling of vegetation, disturbance of aquatic systems, and impacts on wildlife, as discussed below.
The impact of trampling from foot, bike, horse, or any other means of traffic in natural spaces is the most common and systematically researched topic in the field of recreation ecology. [ 2 ] Trampling of vegetation is studied often in terms of soil loss, plant loss, and erosion. Longer-term studies reveal how chronic trampling disturbances engage successional processes, ultimately driving plant community shifts. [ 6 ]
Many recreational activities on aquatic systems have been examined, such as power boating, water skiing, in-stream walking, and swimming. [ 2 ] Even more pronounced than the effects of boating, skiing, walking, and swimming are the impacts of angling on ponds, lakes, and streams. Fishing pressures occurring at recreational scales can exert an important influence on the population sizes, community interactions, and behavioral variation of fish and non-fish aquatic organisms. Scholars also study brackish and marine ecosystems, like estuaries and coral reefs, to assess how SCUBA diving, snorkeling, surfing, and boating influence local ecosystem qualities. An important case stems from the Olympic National Marine Sanctuary, located off the Pacific Washington (US) coast, which uncovers how consistent marine fishing limits the growth and development of many nekton species. Ultimately, recreational overfishing called for markedly stronger regulations. [ 7 ]
These activities can cause physical disturbances to aquatic habitats through sound and movement, as well as subject these systems to an influx of nutrients, introduction of pathogens, and sedimentation. [ 2 ] Although this section barely scratches the surface, water-based recreation throughout marine and freshwater systems effectively diffuses or spreads non-native and often invasive species into new water bodies. This impact often leads to insidious events, which might not outwardly manifest for years, although when it does, many system components will have likely been severely damaged.
Outdoor recreation has many impacts on wildlife, such as wildlife disturbance and habitat destruction. Hiking and camping may affect wildlife through trampling and destruction of their habitats. [ 8 ] Additionally, hiking and camping can result in noise disturbances for wildlife, as well as produce negative impacts through discarded food and trash. [ 8 ] Poor trash management in protected natural areas with high levels of tourism can cause large rubbish piles, leading to wildlife habituation. When wildlife species become dependent on trash, they remain close to humans, raising concerns about human-wildlife conflict.
The intensity of the impacts of outdoor recreation on the environment depends on factors including the amount of use, the type and behavior of use, the timing of use, and the type and condition of the environment; studying the intensity and extent of these factors allows the impacts to be measured. [ 2 ]
Study results have been applied to inform site and visitor management decisions and to provide scientific input to visitor management and carrying capacity planning frameworks.
Recreation ecology research publications have been disproportionately focused on North American field sites, and global publications are dominated by Anglophone authors, with the result that these publications are largely limited to English-language scientific journals. [ 1 ] After North America, Europe and Australia have received the most attention and have had studies conducted on recreation ecology. [ 2 ]
Recent growth of ecotourism has prompted a new batch of recreation ecology studies focusing on developing countries where ecotourism is aggressively promoted. There is an increasing concern that ecotourism is not inherently sustainable and, if unchecked, would generate substantial impacts to ecotourism destinations which are often fragile ecosystems .
Recreation ecology and ecotourism are connected through the Tourism carrying capacity and Biophysical carrying capacity . Understanding the dynamics of nature and the resiliency of an ecosystem allows for estimating the maximum number of visitors that can come to a natural space before negative impacts begin to appear. | https://en.wikipedia.org/wiki/Recreation_ecology |
Recreational mathematics is mathematics carried out for recreation (entertainment) rather than as a strictly research-and-application-based professional activity or as a part of a student's formal education. Although it is not necessarily limited to being an endeavor for amateurs , many topics in this field require no knowledge of advanced mathematics. Recreational mathematics involves mathematical puzzles and games , often appealing to children and untrained adults and inspiring their further study of the subject. [ 1 ]
The Mathematical Association of America (MAA) includes recreational mathematics as one of its seventeen Special Interest Groups , commenting:
Recreational mathematics is not easily defined because it is more than mathematics done as a diversion or playing games that involve mathematics. Recreational mathematics is inspired by deep ideas that are hidden in puzzles, games, and other forms of play. The aim of the SIGMAA on Recreational Mathematics (SIGMAA-Rec) is to bring together enthusiasts and researchers in the myriad of topics that fall under recreational math. We will share results and ideas from our work, show that real, deep mathematics is there awaiting those who look, and welcome those who wish to become involved in this branch of mathematics. [ 2 ]
Mathematical competitions (such as those sponsored by mathematical associations ) are also categorized under recreational mathematics.
Some of the more well-known topics in recreational mathematics are Rubik's Cubes , magic squares , fractals , logic puzzles and mathematical chess problems , but this area of mathematics includes the aesthetics and culture of mathematics, peculiar or amusing stories and coincidences about mathematics , and the personal lives of mathematicians .
Mathematical games are multiplayer games whose rules, strategies, and outcomes can be studied and explained using mathematics . The players of the game may not need to use explicit mathematics in order to play mathematical games. For example, Mancala is studied in the mathematical field of combinatorial game theory , but no mathematics is necessary in order to play it.
Mathematical puzzles require mathematics in order to solve them. They have specific rules, as do multiplayer games , but mathematical puzzles do not usually involve competition between two or more players. Instead, in order to solve such a puzzle , the solver must find a solution that satisfies the given conditions.
Logic puzzles and classical ciphers are common examples of mathematical puzzles. Cellular automata and fractals are also considered mathematical puzzles, even though the solver only interacts with them by providing a set of initial conditions.
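As a small, self-contained example of the kind of object described above, the sketch below runs an elementary cellular automaton (Wolfram's rule 30 is used here purely as a familiar choice): the "solver" only supplies the rule number and the initial row, then watches the pattern evolve.

```python
# One-dimensional (elementary) cellular automaton with fixed zero
# boundaries. Each new cell is read off the rule number, whose bits
# index the eight possible three-cell neighborhoods.

def step(cells, rule=30):
    padded = [0] + cells + [0]
    new = []
    for i in range(len(cells)):
        # neighborhood value: left cell is the most significant bit
        n = (padded[i] << 2) | (padded[i + 1] << 1) | padded[i + 2]
        new.append((rule >> n) & 1)
    return new

width, generations = 31, 15
row = [0] * width
row[width // 2] = 1                 # initial condition: one live cell
for _ in range(generations):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```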
As they often include or require game-like features or thinking, mathematical puzzles are sometimes also called mathematical games.
Magic tricks based on mathematical principles can produce self-working but surprising effects. For instance, a mathemagician might use the combinatorial properties of a deck of playing cards to guess a volunteer's selected card, or Hamming codes to identify whether a volunteer is lying. [ 3 ]
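To give a concrete sense of the Hamming-code trick mentioned above, the sketch below encodes a secret number from 0 to 15 as a Hamming(7,4) codeword, so that seven yes/no answers, at most one of which is a lie, determine both the number and the position of the lie. The bit ordering and question numbering are one standard convention chosen for this example, not a prescription from the source.

```python
# Hamming(7,4) "lie detector": one lie among seven yes/no answers is
# located by the syndrome, after which the secret number is recovered.

def encode(n):
    """Place the 4 data bits of n at positions 3,5,6,7 and fill parity
    positions 1,2,4 so that a truthful answer-pattern has syndrome 0."""
    data = [(n >> k) & 1 for k in (3, 2, 1, 0)]
    word = [0] * 8                      # index 0 unused, positions 1..7
    for pos, bit in zip((3, 5, 6, 7), data):
        word[pos] = bit
    for p in (1, 2, 4):
        word[p] = sum(word[i] for i in range(1, 8) if i & p and i != p) % 2
    return word[1:]

def decode(answers):
    """Return (number, lie_position or None) from 7 yes/no answers."""
    word = [0] + list(answers)
    syndrome = 0
    for i in range(1, 8):
        if word[i]:
            syndrome ^= i
    if syndrome:                        # one answer was a lie: flip it
        word[syndrome] ^= 1
    bits = [word[i] for i in (3, 5, 6, 7)]
    n = (bits[0] << 3) | (bits[1] << 2) | (bits[2] << 1) | bits[3]
    return n, (syndrome or None)

truthful = encode(11)                   # volunteer's secret number is 11
lied = truthful.copy()
lied[4] ^= 1                            # volunteer lies on question 5
print(decode(lied))                     # -> (11, 5)
```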
Numerous other curiosities and pastimes are of non-trivial mathematical interest.
There are many notable blogs and audio or video series devoted to recreational mathematics.
Prominent practitioners and advocates of recreational mathematics have included professional and amateur mathematicians , such as Martin Gardner . | https://en.wikipedia.org/wiki/Recreational_mathematics |
When discussing population dynamics , behavioral ecology , and cell biology , recruitment refers to several different biological processes. In population dynamics, recruitment is the process by which new individuals are added to a population, whether by birth and maturation or by immigration. [ 1 ] When discussing behavioral ecology and animal communication , recruitment is communication that is intended to add members of a group to specific tasks. [ 2 ] Finally, when discussing cell biology, recruitment is the process by which cells are selected for certain tasks. [ 3 ]
In population dynamics and community ecology, recruitment is the process by which individuals are added to a population. [ 1 ] Successful recruitment is contingent on an individual surviving and integrating within the population; in some studies, individuals are only considered to have been recruited into a population once they've reached a certain size or life stage. [ 4 ] [ 5 ] [ 6 ] Recruitment can be hard to assess due to the multitude of factors that affect it, such as predation, birth, and dispersal rates and environmental factors like temperature, precipitation, and natural disturbances. [ 1 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] Recruitment rates in turn affect population size and demographics. [ 1 ] [ 8 ] High recruitment may increase a species' current and future abundance within a system, whereas low recruitment can lead to reduced current and future abundance. [ 10 ]
Recruitment can be an important factor in predicting future population growth potential. For this reason, and due to their economic importance, recruitment has commonly been studied in fishery systems. [ 11 ] [ 12 ] Although some experimental work has been done in aquatic systems, dozens of papers published in the last few decades have modelled recruitment in both marine and freshwater environments. [ 13 ]
Experimental studies on the effects of recruitment are numerous in forest and annual plant systems. [ 1 ] [ 4 ] [ 5 ]
In behavioral ecology and studies of animal communication, recruitment is the process by which individuals in a social group direct other individuals to do certain tasks. [ 2 ] This is often achieved through the use of recruitment pheromones that direct anywhere from one to several hundred individuals to important resources, like food or nesting sites. [ 2 ] Recruitment is practiced in a wide variety of eusocial taxa, most notably in hymenoptera (the ants, bees, and wasps) and termites but also in social caterpillars, beetles, and even a species of naked mole rats ( Heterocephalus glaber ). [ 2 ] | https://en.wikipedia.org/wiki/Recruitment_(biology) |
Recrystallization is a broad class of chemical purification techniques characterized by the dissolution of an impure sample in a solvent or solvent mixture , followed by some change in conditions that encourages the formation of pure isolate as solid crystals . [ 1 ] Recrystallization as a purification technique is driven by spontaneous processes of self-assembly that leverage the highly ordered (i.e. low- entropy ) and periodic characteristics of a crystal's molecular structure to produce purification.
The driving force of this purification emerges from the difference in molecular interactions between the isolate and the impurities: if a molecule of the desired isolate interacts with any isolate crystal present, it is likely the molecule deposits on the crystal's ordered surface and contributes to the crystal's growth; if a molecule of the impurity interacts with any isolate crystal present, it is unlikely to deposit on the crystal's ordered surface, and thus stays dissolved in the solvent. Initial crystals of isolate form by processes of stochastic nucleation and grow to macroscopic sizes when isolate molecules in solution deposit on them.
The simplest example of recrystallization is by temperature manipulation of a solution where the isolate compound has an endothermic dissolution (Δ H > 0) and a solubility product K sp that increases with temperature. A saturated solution of the impure sample (usually in a disordered state of matter, such as a solid powder or a viscous liquid ) is prepared near or at the boiling point of the solvent, and then the solution is slowly cooled to form a supersaturated solution where crystal nucleation (and thus formation) is imminent. [ 2 ]
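The role of temperature can be made quantitative with the van 't Hoff relation (a standard result, not taken from the source text): d ln K sp / d T = Δ H / ( R T 2 ) {\displaystyle \mathrm {d} \ln K_{\mathrm {sp} }/\mathrm {d} T=\Delta H/(RT^{2})} . For an endothermic dissolution (Δ H > 0) this means K sp decreases as the solution cools, so any solute in excess of the new, lower solubility limit becomes available to crystallize.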
The importance of crystallized compounds is so great that considerable effort and many reports describe methods for crystallization. Among the more popular methods are: [ 3 ]
In a simple case, a solution of a solid is cooled below its saturation temperature. In some cases, the solution is prepared with a hot solvent. In some cases, a mixed solvent is employed, for example aqueous ethanol . [ 4 ] Some of the solute will crystallize upon cooling. Ideally the precipitate will be free of some or most of the impurities, which are more soluble in the solvent. [ 5 ]
Two-solvent recrystallization relies on the product being far more soluble in one solvent than in a second solvent, which is called the antisolvent. The solvent and antisolvent must be miscible. The volume ratio between the solvent and antisolvent is important, as is the concentration of the sample. [ 6 ] [ 7 ] The antisolvent is added to the solution of the solute until incipient precipitation of the solid. The solution is then cooled or simply allowed to stand to induce further crystallization. In one variation of this method, the solution is layered with the antisolvent.
Recrystallized products are often subject to X-ray crystallography for purity assessment. [ 8 ] The technique requires crystallized products to be singular, and absent of clumps. [ 8 ] Several approaches to this phenomenon are listed below. | https://en.wikipedia.org/wiki/Recrystallization_(chemistry) |
In geology , solid-state recrystallization is a metamorphic process that occurs under high temperatures and pressures where atoms of minerals are reorganized by diffusion and/or dislocation glide. During this process, the physical structure of the minerals is altered while the composition remains unchanged. [ 1 ] [ 2 ] This is in contrast to metasomatism , which is the chemical alteration of a rock by hydrothermal and other fluids.
Solid-state recrystallization can be illustrated by observing how snow recrystallizes to ice. When snow is subjected to varying temperatures and pressures, individual snowflakes undergo a physical transformation but their composition remains the same. Limestone is a sedimentary rock that undergoes metamorphic recrystallization to form marble , and clays can recrystallize to muscovite mica .
| https://en.wikipedia.org/wiki/Recrystallization_(geology)
In materials science , recrystallization is a process by which deformed grains are replaced by a new set of defect -free grains that nucleate and grow until the original grains have been entirely consumed. Recrystallization is usually accompanied by a reduction in the strength and hardness of a material and a simultaneous increase in the ductility . Thus, the process may be introduced as a deliberate step in metals processing or may be an undesirable byproduct of another processing step. The most important industrial uses are softening of metals previously hardened or rendered brittle by cold work , and control of the grain structure in the final product. Recrystallization temperature is typically 0.3–0.4 times the melting point for pure metals and 0.5 times for alloys.
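As a rough worked example (an illustration assumed here, not taken from the source): pure copper melts at about 1358 K, so this rule of thumb suggests a recrystallization temperature of roughly 0.3–0.4 × 1358 K ≈ 410–540 K (about 140–270 °C), with the exact value depending on prior cold work and annealing time.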
Recrystallization is defined as the process in which the grains of a crystal structure take on a new structure or a new crystal shape.
A precise definition of recrystallization is difficult to state as the process is strongly related to several other processes, most notably recovery and grain growth . In some cases it is difficult to precisely define the point at which one process begins and another ends. Doherty et al. defined recrystallization as:
"... the formation of a new grain structure in a deformed material by the formation and migration of high angle grain boundaries driven by the stored energy of deformation. High angle boundaries are those with greater than a 10-15° misorientation" [ 1 ]
Thus the process can be differentiated from recovery (where high angle grain boundaries do not migrate) and grain growth (where the driving force is only due to the reduction in boundary area).
Recrystallization may occur during or after deformation (during cooling or subsequent heat treatment, for example). The former is termed dynamic while the latter is termed static . In addition, recrystallization may occur in a discontinuous manner, where distinct new grains form and grow, or a continuous manner, where the microstructure gradually evolves into a recrystallized microstructure. The different mechanisms by which recrystallization and recovery occur are complex and in many cases remain controversial. The following description is primarily applicable to static discontinuous recrystallization, which is the most classical variety and probably the most understood. Additional mechanisms include ( geometric ) dynamic recrystallization and strain induced boundary migration .
Secondary recrystallization occurs when a certain very small number of {110}<001> (Goss) grains grow selectively, about one in 10^6 primary grains, at the expense of many other primary recrystallized grains. This results in abnormal grain growth , which may be beneficial or detrimental for product material properties. The mechanism of secondary recrystallization is a small and uniform primary grain size, achieved through the inhibition of normal grain growth by fine precipitates called inhibitors. [ 2 ] Goss grains are named in honor of Norman P. Goss , the inventor of grain-oriented electrical steel circa 1934.
There are several, largely empirical laws of recrystallization:
During plastic deformation the work performed is the integral of the stress and strain in the plastic deformation regime. Although the majority of this work is converted to heat, some fraction (~1–5%) is retained in the material as defects—particularly dislocations. The rearrangement or elimination of these dislocations will reduce the internal energy of the system and so there is a thermodynamic driving force for such processes. At moderate to high temperatures, particularly in materials with a high stacking fault energy such as aluminium and nickel, recovery occurs readily and free dislocations will readily rearrange themselves into subgrains surrounded by low-angle grain boundaries.
The driving force is the difference in energy between the deformed and recrystallized state Δ E which can be determined by the dislocation density or the subgrain size and boundary energy (Doherty, 2005):
where ρ is the dislocation density, G is the shear modulus, b is the Burgers vector of the dislocations, γ s is the subgrain boundary energy and d s is the subgrain size.
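Commonly quoted approximate forms of this driving force, taken here from standard treatments (such as Humphreys and Hatherly) rather than from the source text, are Δ E ≈ 1 2 ρ G b 2 {\displaystyle \Delta E\approx {\tfrac {1}{2}}\rho Gb^{2}} for the dislocation-based estimate and Δ E ≈ 3 γ s / d s {\displaystyle \Delta E\approx 3\gamma _{s}/d_{s}} for the subgrain-based estimate; both are order-of-magnitude estimates rather than exact results.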
Historically it was assumed that the nucleation rate of new recrystallized grains would be determined by the thermal fluctuation model successfully used for solidification and precipitation phenomena. In this theory it is assumed that as a result of the natural movement of atoms (which increases with temperature) small nuclei would spontaneously arise in the matrix. The formation of these nuclei would be associated with an energy requirement due to the formation of a new interface and an energy liberation due to the formation of a new volume of lower energy material. If the nuclei were larger than some critical radius then it would be thermodynamically stable and could start to grow.
The main problem with this theory is that the stored energy due to dislocations is very low (0.1–1 J m⁻³) while the energy of a grain boundary is quite high (~0.5 J m⁻²). Calculations based on these values found that the observed nucleation rate was greater than the calculated one by some impossibly large factor (~10^50).
As a result, the alternate theory proposed by Cahn in 1949 is now universally accepted. The recrystallized grains do not nucleate in the classical fashion but rather grow from pre-existing sub-grains and cells. The 'incubation time' is then a period of recovery where sub-grains with low-angle boundaries (<1–2°) begin to accumulate dislocations and become increasingly misoriented with respect to their neighbors. The increase in misorientation increases the mobility of the boundary and so the rate of growth of the sub-grain increases. If one sub-grain in a local area happens to have an advantage over its neighbors (such as locally high dislocation densities, a greater size or favorable orientation) then this sub-grain will be able to grow more rapidly than its competitors. As it grows its boundary becomes increasingly misoriented with respect to the surrounding material until it can be recognized as an entirely new strain-free grain.
Recrystallization kinetics are commonly observed to follow a characteristic sigmoidal profile. There is an initial 'nucleation period' t 0 where the nuclei form, and then begin to grow at a constant rate consuming the deformed matrix. Although the process does not strictly follow classical nucleation theory it is often found that such mathematical descriptions provide at least a close approximation. For an array of spherical grains the mean radius R at a time t is (Humphreys and Hatherly 2004):
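Assuming nucleation at time t 0 followed by growth at a constant rate (the standard treatment, stated here for completeness), this takes the form R = G ( t − t 0 ) {\displaystyle R=G\,(t-t_{0})}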
where t 0 is the nucleation time and G is the growth rate dR/dt. If N nuclei form in the time increment dt and the grains are assumed to be spherical then the volume fraction will be:
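In the standard treatment (assumed here), summing the contributions of nuclei formed at all earlier times t 0 gives, for a constant nucleation rate N ˙ {\displaystyle {\dot {N}}} , f = 4 π 3 N ˙ G 3 ∫ 0 t ( t − t 0 ) 3 d t 0 = π 3 N ˙ G 3 t 4 {\displaystyle f={\frac {4\pi }{3}}{\dot {N}}G^{3}\int _{0}^{t}(t-t_{0})^{3}\,\mathrm {d} t_{0}={\frac {\pi }{3}}{\dot {N}}G^{3}t^{4}}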
This equation is valid in the early stages of recrystallization when f<<1 and the growing grains are not impinging on each other. Once the grains come into contact the rate of growth slows and is related to the fraction of untransformed material (1-f) by the Johnson-Mehl equation:
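Under the same assumptions, the Johnson–Mehl (JMAK) form is usually written as f = 1 − exp ( − π 3 N ˙ G 3 t 4 ) {\displaystyle f=1-\exp \left(-{\frac {\pi }{3}}{\dot {N}}G^{3}t^{4}\right)} , a standard result quoted here for completeness rather than verbatim from the source.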
While this equation provides a better description of the process it still assumes that the grains are spherical, the nucleation and growth rates are constant, the nuclei are randomly distributed and the nucleation time t 0 is small. In practice few of these are actually valid and alternate models need to be used.
It is generally acknowledged that any useful model must not only account for the initial condition of the material but also the constantly changing relationship between the growing grains, the deformed matrix and any second phases or other microstructural factors. The situation is further complicated in dynamic systems where deformation and recrystallization occur simultaneously. As a result, it has generally proven impossible to produce an accurate predictive model for industrial processes without resorting to extensive empirical testing. Since this may require the use of industrial equipment that has not actually been built there are clear difficulties with this approach.
The annealing temperature has a dramatic influence on the rate of recrystallization which is reflected in the above equations. However, for a given temperature there are several additional factors that will influence the rate.
The rate of recrystallization is heavily influenced by the amount of deformation and, to a lesser extent, the manner in which it is applied. Heavily deformed materials will recrystallize more rapidly than those deformed to a lesser extent. Indeed, below a certain deformation recrystallization may never occur. Deformation at higher temperatures will allow concurrent recovery and so such materials will recrystallize more slowly than those deformed at room temperature e.g. contrast hot and cold rolling . In certain cases deformation may be unusually homogeneous or occur only on specific crystallographic planes . The absence of orientation gradients and other heterogeneities may prevent the formation of viable nuclei. Experiments in the 1970s found that molybdenum deformed to a true strain of 0.3 recrystallized most rapidly when tensioned and at decreasing rates for wire drawing , rolling and compression (Barto & Ebert 1971).
The orientation of a grain and how the orientation changes during deformation influence the accumulation of stored energy and hence the rate of recrystallization. The mobility of the grain boundaries is influenced by their orientation and so some crystallographic textures will result in faster growth than others.
Solute atoms, both deliberate additions and impurities, have a profound influence on the recrystallization kinetics. Even minor concentrations may have a substantial influence e.g. 0.004% Fe increases the recrystallization temperature by around 100 °C (Humphreys and Hatherly 2004). It is currently unknown whether this effect is primarily due to the retardation of nucleation or the reduction in the mobility of grain boundaries i.e. growth.
Many alloys of industrial significance have some volume fraction of second phase particles, either as a result of impurities or from deliberate alloying additions. Depending on their size and distribution such particles may act to either encourage or retard recrystallization.
Recrystallization is prevented or significantly slowed by a dispersion of small, closely spaced particles due to Zener pinning on both low- and high-angle grain boundaries. This pressure directly opposes the driving force arising from the dislocation density and will influence both the nucleation and growth kinetics. The effect can be rationalized with respect to the particle dispersion level F v / r {\displaystyle F_{v}/r} where F v {\displaystyle F_{v}} is the volume fraction of the second phase and r is the radius. At low F v / r {\displaystyle F_{v}/r} the grain size is determined by the number of nuclei, and so initially may be very small. However the grains are unstable with respect to grain growth and so will grow during annealing until the particles exert sufficient pinning pressure to halt them. At moderate F v / r {\displaystyle F_{v}/r} the grain size is still determined by the number of nuclei but now the grains are stable with respect to normal growth (while abnormal growth is still possible). At high F v / r {\displaystyle F_{v}/r} the unrecrystallized deformed structure is stable and recrystallization is suppressed.
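The pinning pressure referred to here is commonly estimated with a Zener-type expression (assumed for illustration): P Z ≈ 3 F v γ / ( 2 r ) {\displaystyle P_{Z}\approx 3F_{v}\gamma /(2r)} , where γ is the grain boundary energy; recrystallization and grain growth are retarded once P Z becomes comparable to the stored-energy driving force discussed above.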
The deformation fields around large (over 1 μm) non-deformable particles are characterised by high dislocation densities and large orientation gradients and so are ideal sites for the development of recrystallization nuclei. This phenomenon, called particle stimulated nucleation (PSN), is notable as it provides one of the few ways to control recrystallization by controlling the particle distribution.
The size and misorientation of the deformed zone is related to the particle size and so there is a minimum particle size required to initiate nucleation. Increasing the extent of deformation will reduce the minimum particle size, leading to a PSN regime in size-deformation space.
If the efficiency of PSN is one (i.e. each particle stimulates one nuclei), then the final grain size will be simply determined by the number of particles. Occasionally the efficiency can be greater than one if multiple nuclei form at each particle but this is uncommon. The efficiency will be less than one if the particles are close to the critical size and large fractions of small particles will actually prevent recrystallization rather than initiating it (see above).
The recrystallization behavior of materials containing a wide distribution of particle sizes can be difficult to predict. This is compounded in alloys where the particles are thermally-unstable and may grow or dissolve with time. In various systems, abnormal grain growth may occur giving rise to unusually large crystallites growing at the expense of smaller ones. The situation is more simple in bimodal alloys which have two distinct particle populations. An example is Al-Si alloys where it has been shown that even in the presence of very large (>5 μm) particles the recrystallization behavior is dominated by the small particles (Chan & Humphreys 1984). In such cases the resulting microstructure tends to resemble one from an alloy with only small particles.
The recrystallization temperature is the temperature at which recrystallization can occur for a given material and processing conditions. This is not a set temperature and is dependent upon factors including the following: [ 3 ]
The rectangulus was an astronomical instrument made by Richard of Wallingford around 1326. Dissatisfied with the limitations of existing astrolabes , Richard developed the rectangulus as an instrument for spherical trigonometry and to measure the angles between planets and other astronomical bodies. [ 1 ] [ 2 ] This was one of a number of instruments he created, including the Albion , a form of equatorium , and a famously complicated and expensive horologium ( astronomical clock ).
His Tractus Rectanguli , describing the rectangulus, was an influential text in medieval astronomy and at least thirty copies were known to survive. [ 1 ] [ 2 ] His Quadripartitum was the first text on spherical trigonometry to be published in Western Europe. [ 3 ]
The rectangulus was a form of skeleton torquetum . [ 4 ] This was a series of nested angular scales, so that measurements in azimuth and elevation could be made directly in polar coordinates, relative to the ecliptic . Conversion from these coordinates, though, was difficult, involving what was then the leading mathematics of the day. The rectangulus was an analogue computing device to simplify this: instead of making angular measurements it could resolve the angles into Cartesian components directly. This then simplified the further calculations.
The rectangulus was constructed as a brass pillar with a number of linear scales hinged above it. Pinhole sights on the upper arm allowed it to be pointed accurately at the astronomical target. Plumb bob lines descended from the scales above and intersected with linear scales marked on the horizontal scales below. [ 5 ] These allowed measures to be read, not as angles, but as trigonometric ratios.
To celebrate the 600th anniversary of the Rectangulus in 1926 a replica was constructed. [ 2 ] [ 6 ] This is now in the History of Science Museum, Oxford . [ 7 ] | https://en.wikipedia.org/wiki/Rectangulus |
Rectisol is the trade name for an acid gas removal process that uses methanol as a solvent to separate acid gases such as hydrogen sulfide and carbon dioxide from valuable feed gas streams. [ 1 ] By doing so, the feed gas is made more suitable for combustion and/or further processing. Rectisol is used most often to treat synthesis gas (primarily hydrogen and carbon monoxide ) produced by gasification of coal or heavy hydrocarbons , as the methanol solvent is well able to remove trace contaminants such as ammonia , mercury , and hydrogen cyanide usually found in these gases. As an acid gas and large component of valuable feed gas streams, CO 2 is separated during the methanol solvent regeneration.
In the Rectisol process (licensed by both Linde AG and Air Liquide ), cold methanol at approximately –40 °F (–40 °C) dissolves (absorbs) the acid gases from the feed gas at relatively high pressure, usually 400 to 1000 psia (2.76 to 6.89 MPa). The rich solvent containing the acid gases is then let down in pressure to release and recover the acid gases. The Rectisol process can operate selectively to recover hydrogen sulfide and carbon dioxide as separate streams, so that the hydrogen sulfide can be sent to either a Claus unit for conversion to elemental sulfur or a WSA Process unit to recover sulfuric acid , while at the same time the carbon dioxide can be sequestered ( CCS ) or used for enhanced oil recovery .
Rectisol, like Selexol and Purisol , is a physical solvent, unlike amine based acid gas removal solvents that rely on a chemical reaction with the acid gases. Methanol as a solvent is inexpensive compared to the proprietary Selexol and Purisol solvents. The Rectisol process requires more electrical energy for refrigeration to maintain the low temperatures required but it also requires less steam energy for regeneration. Although capital costs for methanol solvent (Rectisol) units are higher than proprietary solvent units, methanol as a cold, physical solvent can remove greater percentages of acid gas components providing a higher purity cleaned gas.
The Rectisol process is very flexible and can be configured to address the separation of synthesis gas into various components, depending on the final products that are desired from the gas. It is very suitable to complex schemes where a combination of products are needed, for example hydrogen, carbon monoxide, ammonia and methanol synthesis gases and fuel gas side streams. [ 2 ] | https://en.wikipedia.org/wiki/Rectisol |
The rectoanal inhibitory reflex ( RAIR ), also known as the anal sampling mechanism , anal sampling reflex , rectosphincteric reflex , or anorectal sampling reflex , is a reflex characterized by a transient involuntary relaxation of the internal anal sphincter in response to distention of the rectum . [ 1 ] The RAIR provides the upper anal canal with the ability to discriminate between flatus and fecal material .
The ability of the rectum to discriminate between gaseous, liquid and solid contents is essential to the ability to voluntarily control defecation . The RAIR allows for voluntary flatulation to occur without also eliminating solid waste, irrespective of the presence of fecal material in the anal canal. [ 2 ]
The physiological basis for the RAIR is poorly understood, [ 3 ] but it is thought to involve a coordinated response by the internal anal sphincter to rectal distention with recovery of anal pressure from the distal to the proximal sphincter. [ 1 ] Mediated by the autonomic nervous system , the afferent limb of this reflex depends upon an intact network of interstitial cells of Cajal in the internal anal sphincter. These cells, which are mediated at least in part by nitric oxide , provide inhibitory innervation of the internal anal sphincter. [ 4 ]
Impairment of this reflex can result in fecal incontinence . [ 5 ] [ 6 ] The absence of a RAIR is pathognomonic for Hirschsprung's disease . [ 7 ]
| https://en.wikipedia.org/wiki/Rectoanal_inhibitory_reflex
A recuperative multi-tube cooler is a rotary drum cooler used for continuous processes in chemical engineering .
Recuperative multi-tube coolers essentially consist of a rotating rotor, which is usually driven via a chain. At the ends of the rotor are fixed housings for product feed and outlet. The rotor is supported on running treads, as is typical for rotary drums.
The interior of the rotor consists of several tubes in a revolver-type (or planetary) arrangement. The tubes are completely surrounded by a jacket.
According to requirements recuperative multi-tube coolers are built with diameters between 1.0 and 4.0 m and lengths from 10 to 40 m.
Recuperative multi-tube coolers work with indirect air cooling. That means that there is no direct contact between the product to be cooled and the cooling air. The heat is exchanged indirectly via thermal conduction.
Ambient air is used as cooling air, which is drawn between the jacket and the tubes. Product and cooling air pass through the cooler in counterflow.
The product to be cooled falls directly into the product feed housing. By the rotary movement and a little slope of the rotor, the product is conveyed through the cooler. The rotation causes a permanent mixing of the product in the tubes and hence a good heat transfer.
Due to the indirect method of operation, the coolers provide hot and clean air that can be reused as an energy source. The term recuperative derives from this opportunity to recover energy.
The coolers can be used for cooling free-flowing, fine-grained bulk material. They are especially useful when consumers of the recovered hot air are close by. This is usually the case in calcination processes downstream of hot-gas-fired rotary kilns or similar equipment.
The hot air is used as a preheated supply of combustion air in the kilns. The consumption of primary energy can thereby be reduced significantly.
The coolers are mostly used in the pigment industry, e.g. for cooling of titanium dioxide pigments after calcination.
The entry temperatures of the products can reach up to 1000 °C. | https://en.wikipedia.org/wiki/Recuperative_multi-tube_cooler |
A recuperator is a special-purpose counter-flow energy recovery heat exchanger positioned within the supply and exhaust air streams of an air handling system, or in the exhaust gases of an industrial process, in order to recover the waste heat . Generally, they are used to extract heat from the exhaust and use it to preheat air entering the combustion system. In this way they use waste energy to heat the air, offsetting some of the fuel, and thereby improve the energy efficiency of the system as a whole.
In many types of processes, combustion is used to generate heat, and the recuperator serves to recuperate, or reclaim this heat, in order to reuse or recycle it. The term recuperator refers as well to liquid-liquid counterflow heat exchangers used for heat recovery in the chemical and refinery industries and in closed processes such as ammonia-water or LiBr-water absorption refrigeration cycle.
Recuperators are often used in association with the burner portion of a heat engine , to increase the overall efficiency. For example, in a gas turbine engine, air is compressed, mixed with fuel, which is then burned and used to drive a turbine. The recuperator transfers some of the waste heat in the exhaust to the compressed air, thus preheating it before entering the fuel burner stage. Since the gases have been pre-heated, less fuel is needed to heat the gases up to the turbine inlet temperature. By recovering some of the energy usually lost as waste heat, the recuperator can make a heat engine or gas turbine significantly more efficient.
Normally the heat transfer between airstreams provided by the device is termed as " sensible heat ", which is the exchange of energy, or enthalpy , resulting in a change in temperature of the medium (air in this case), but with no change in moisture content. However, if moisture or relative humidity levels in the return air stream are high enough to allow condensation to take place in the device, then this will cause " latent heat " to be released and the heat transfer material will be covered with a film of water. Despite a corresponding absorption of latent heat, as some of the water film is evaporated in the opposite airstream, the water will reduce the thermal resistance of the boundary layer of the heat exchanger material and thus improve the heat transfer coefficient of the device, and hence increase efficiency. The energy exchange of such devices now comprises both sensible and latent heat transfer; in addition to a change in temperature, there is also a change in moisture content of the exhaust air stream.
However, the film of condensation will also slightly increase pressure drop through the device, and depending upon the spacing of the matrix material, this can increase resistance by up to 30%. If the unit is not laid to falls, and the condensate not allowed to drain properly, this will increase fan energy consumption and reduce the seasonal efficiency of the device.
In heating, ventilation and air-conditioning systems, HVAC , recuperators are commonly used to re-use waste heat from exhaust air normally expelled to atmosphere . Devices typically comprise a series of parallel plates of aluminium , plastic , stainless steel , synthetic fiber , or copper , alternate pairs of which are enclosed on two sides to form twin sets of ducts at right angles to each other, and which contain the supply and extract air streams. In this manner heat from the exhaust air stream is transferred through the separating plates, and into the supply air stream. Manufacturers claim gross efficiencies of up to 95% depending upon the specification of the unit.
The characteristics of this device are attributable to the relationship between the physical size of the unit, in particular the air path distance, and the spacing of the plates. For an equal air pressure drop through the device, a small unit will have a narrower plate spacing and a lower air velocity than a larger unit, but both units may be just as efficient. Because of the cross-flow design of the unit, its physical size will dictate the air path length, and as this increases, heat transfer will increase but pressure drop will also increase, and so plate spacing is increased to reduce pressure drop, but this in turn will reduce heat transfer.
As a general rule a recuperator selected for a pressure drop of between 150–250 pascals (0.022–0.036 psi) will have a good efficiency, while having a small effect on fan power consumption, but will have in turn a higher seasonal efficiency than that for physically smaller, but higher pressure drop recuperator.
When heat recovery is not required, it is typical for the device to be bypassed by use of dampers arranged within the ventilation distribution system. Assuming the fans are fitted with inverter speed controls, set to maintain a constant pressure in the ventilation system, then the reduced pressure drop leads to a slowing of the fan motor and thus reducing power consumption, and in turn improves the seasonal efficiency of the system.
Metallic recuperators have also been used for many years to recover heat from waste gases in order to preheat combustion air and fuel, reducing energy costs and the carbon footprint of operation. Compared to alternatives such as regenerative furnaces, initial costs are lower, there are no valves switching back and forth, there are no induced-draft fans, and no web of gas ducts spread all over the furnace is required.
Historically the recovery ratios of recuperators compared to regenerative burners were low. However, recent improvements to technology have allowed recuperators to recover 70-80% of the waste heat and pre-heated air up to 850–900 °C (1,560–1,650 °F) is now possible.
Recuperators can be used to increase the efficiency of gas turbines for power generation, provided the exhaust gas is hotter than the compressor outlet temperature. The exhaust heat from the turbine is used to pre-heat the air from the compressor before further heating in the combustor, reducing the fuel input required. The larger the temperature difference between turbine out and compressor out, the greater the benefit from the recuperator. [ 1 ] Therefore, microturbines (<1 MW), which typically have low pressure ratios, have the most to gain from the use of a recuperator. In practice, a doubling of efficiency is possible through the use of a recuperator. [ 2 ] The major practical challenge for a recuperator in microturbine applications is coping with the exhaust gas temperature, which can exceed 750 °C (1,380 °F). | https://en.wikipedia.org/wiki/Recuperator |
Recurrence period density entropy ( RPDE ) is a method, in the fields of dynamical systems , stochastic processes , and time series analysis , for determining the periodicity, or repetitiveness of a signal.
Recurrence period density entropy is useful for characterising the extent to which a time series repeats the same sequence, and is therefore similar to linear autocorrelation and time delayed mutual information , except that it measures repetitiveness in the phase space of the system, and is thus a more reliable measure based upon the dynamics of the underlying system that generated the signal. It has the advantage that it does not require the assumptions of linearity , Gaussianity or dynamical determinism. It has been successfully used to detect abnormalities in biomedical contexts such as speech signals. [ 1 ] [ 2 ]
The RPDE value H n o r m {\displaystyle \scriptstyle H_{\mathrm {norm} }} is a scalar in the range zero to one. For purely periodic signals, H n o r m = 0 {\displaystyle \scriptstyle H_{\mathrm {norm} }=0} , whereas for purely i.i.d. , uniform white noise , H n o r m ≈ 1 {\displaystyle \scriptstyle H_{\mathrm {norm} }\approx 1} . [ 2 ]
The RPDE method first requires the embedding of a time series in phase space , which, according to stochastic extensions to Takens' embedding theorems, can be carried out by forming time-delayed vectors:
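A standard form for such delay vectors, consistent with the definitions that follow, is X n = ( x n , x n + τ , … , x n + ( M − 1 ) τ ) {\displaystyle \mathbf {X} _{n}=(x_{n},x_{n+\tau },\ldots ,x_{n+(M-1)\tau })}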
for each value x n in the time series, where M is the embedding dimension , and τ is the embedding delay. These parameters are obtained by systematic search for the optimal set (due to lack of practical embedding parameter techniques for stochastic systems) (Stark et al. 2003). Next, around each point X n {\displaystyle \scriptstyle \mathbf {X} _{n}} in the phase space, an ε {\displaystyle \varepsilon } -neighbourhood (an m -dimensional ball with this radius) is formed, and every time the time series returns to this ball, after having left it, the time difference T between successive returns is recorded in a histogram . This histogram is normalised to sum to unity, to form an estimate of the recurrence period density function P ( T ). The normalised entropy of this density:
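In the normalisation used by Little et al. (assumed here), this takes the form H norm = − ( ln T max ) − 1 ∑ i = 1 T max P ( i ) ln P ( i ) {\displaystyle H_{\mathrm {norm} }=-\left(\ln T_{\max }\right)^{-1}\sum _{i=1}^{T_{\max }}P(i)\ln P(i)} , and this quantity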
is the RPDE value, where T max {\displaystyle \scriptstyle T_{\max }} is the largest recurrence value (typically on the order of 1000 samples). [ 2 ] Note that RPDE is intended to be applied to both deterministic and stochastic signals; therefore, strictly speaking, Takens' original embedding theorem does not apply, and needs some modification. [ 3 ]
RPDE has the ability to detect subtle changes in natural biological time series such as the breakdown of regular periodic oscillation in abnormal cardiac function which are hard to detect using classical signal processing tools such as the Fourier transform or linear prediction . The recurrence period density is a sparse representation for nonlinear, non-Gaussian and nondeterministic signals, whereas the Fourier transform is only sparse for purely periodic signals. | https://en.wikipedia.org/wiki/Recurrence_period_density_entropy |
Recurrence quantification analysis ( RQA ) is a method of nonlinear data analysis (cf. chaos theory ) for the investigation of dynamical systems . It quantifies the number and duration of recurrences of a dynamical system presented by its phase space trajectory. [ 1 ]
The recurrence quantification analysis (RQA) was developed in order to quantify differently appearing recurrence plots (RPs), based on the small-scale structures therein. [ 2 ] Recurrence plots are tools which visualise the recurrence behaviour of the phase space trajectory x → ( i ) {\displaystyle {\vec {x}}(i)} of dynamical systems : [ 3 ]
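In its standard form (assumed here), the recurrence plot is the matrix R i , j ( ε ) = Θ ( ε − ‖ x → ( i ) − x → ( j ) ‖ ) , i , j = 1 , … , N {\displaystyle R_{i,j}(\varepsilon )=\Theta (\varepsilon -\|{\vec {x}}(i)-{\vec {x}}(j)\|),\quad i,j=1,\ldots ,N}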
where Θ : R → { 0 , 1 } {\displaystyle \Theta :\mathbf {R} \rightarrow \{0,1\}} is the Heaviside function and ε {\displaystyle \varepsilon } a predefined tolerance.
Recurrence plots mostly contain single dots and lines which are parallel to the mean diagonal ( line of identity , LOI) or which are vertical/horizontal. Lines parallel to the LOI are referred to as diagonal lines and the vertical structures as vertical lines . Because an RP is usually symmetric, horizontal and vertical lines correspond to each other, and, hence, only vertical lines are considered. The lines correspond to a typical behaviour of the phase space trajectory: whereas the diagonal lines represent such segments of the phase space trajectory which run parallel for some time, the vertical lines represent segments which remain in the same phase space region for some time. [ 1 ]
If only a univariate time series u ( t ) {\displaystyle u(t)} is available, the phase space can be reconstructed by using a time delay embedding (see Takens' theorem ):
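A standard delay-embedding form, consistent with the definitions that follow, is x → ( i ) = ( u ( i ) , u ( i + τ ) , … , u ( i + ( m − 1 ) τ ) ) {\displaystyle {\vec {x}}(i)=(u(i),u(i+\tau ),\ldots ,u(i+(m-1)\tau ))}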
where u ( i ) {\displaystyle u(i)} is the time series (with t = i Δ t {\displaystyle t=i\Delta t} and Δ t {\displaystyle \Delta t} the sampling time), m {\displaystyle m} the embedding dimension, and τ {\displaystyle \tau } the time delay. However, phase space reconstruction is not an essential part of the RQA (although this is often stated in the literature), because the RQA is based on phase space trajectories, which could be derived from the system's variables directly (e.g., from the three variables of the Lorenz system ) or from multivariate data.
The RQA quantifies the small-scale structures of recurrence plots, which present the number and duration of the recurrences of a dynamical system. The measures introduced for the RQA were developed heuristically between 1992 and 2002. [ 4 ] [ 5 ] [ 6 ] They are actually measures of complexity . The main advantage of the RQA is that it can provide useful information even for short and non-stationary data, where other methods fail.
RQA can be applied to almost every kind of data. It is widely used in physiology , but was also successfully applied on problems from engineering , chemistry , Earth sciences etc. [ 2 ] Further extensions and variations of measures for quantifying recurrence properties have been proposed to address specific research questions. RQA measures are also combined with machine learning approaches for classification tasks. [ 7 ]
The simplest measure is the recurrence rate , which is the density of recurrence points in a recurrence plot: [ 1 ]
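In the notation above, this is commonly written as R R = 1 N 2 ∑ i , j = 1 N R i , j {\displaystyle RR={\frac {1}{N^{2}}}\sum _{i,j=1}^{N}R_{i,j}} (the standard definition, assumed here).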
The recurrence rate corresponds to the probability that a specific state will recur. It is almost equal to the definition of the correlation sum , where the LOI is excluded from the computation.
The next measure is the percentage of recurrence points which form diagonal lines in the recurrence plot of minimal length ℓ min {\displaystyle \ell _{\min }} : [ 5 ]
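A commonly used form (assumed here) is D E T = ∑ ℓ = ℓ min N ℓ P ( ℓ ) / ∑ ℓ = 1 N ℓ P ( ℓ ) {\displaystyle DET=\sum _{\ell =\ell _{\min }}^{N}\ell \,P(\ell )\,/\sum _{\ell =1}^{N}\ell \,P(\ell )}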
where P ( ℓ ) {\displaystyle P(\ell )} is the frequency distribution of the lengths ℓ {\displaystyle \ell } of the diagonal lines (i.e., it counts how many instances have length ℓ {\displaystyle \ell } ). This measure is called determinism and is related to the predictability of the dynamical system , because white noise has a recurrence plot with almost only single dots and very few diagonal lines, whereas a deterministic process has a recurrence plot with very few single dots but many long diagonal lines.
The number of recurrence points which form vertical lines can be quantified in the same way: [ 6 ]
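The usual definition (assumed here) is L A M = ∑ v = v min N v P ( v ) / ∑ v = 1 N v P ( v ) {\displaystyle LAM=\sum _{v=v_{\min }}^{N}v\,P(v)\,/\sum _{v=1}^{N}v\,P(v)}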
where P ( v ) {\displaystyle P(v)} is the frequency distribution of the lengths v {\displaystyle v} of the vertical lines, which have at least a length of v min {\displaystyle v_{\min }} . This measure is called laminarity and is related to the amount of laminar phases in the system ( intermittency ).
The lengths of the diagonal and vertical lines can be measured as well. The averaged diagonal line length [ 5 ]
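L = ∑ ℓ = ℓ min N ℓ P ( ℓ ) / ∑ ℓ = ℓ min N P ( ℓ ) {\displaystyle L=\sum _{\ell =\ell _{\min }}^{N}\ell \,P(\ell )\,/\sum _{\ell =\ell _{\min }}^{N}P(\ell )} (its usual form, stated here for completeness)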
is related to the predictability time of the dynamical system
and the trapping time , measuring the average length
of the vertical lines, [ 6 ]
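T T = ∑ v = v min N v P ( v ) / ∑ v = v min N P ( v ) {\displaystyle TT=\sum _{v=v_{\min }}^{N}v\,P(v)\,/\sum _{v=v_{\min }}^{N}P(v)} (its usual form, stated here for completeness)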
is related to the laminarity time of the dynamical system, i.e. how long the system remains in a specific state. [ 6 ]
Because the length of the diagonal lines is related to how long segments of the phase space trajectory run parallel, i.e. to the divergence behaviour of the trajectories, it has sometimes been stated that the reciprocal of the maximal length of the diagonal lines (without the LOI) is an estimator for the positive maximal Lyapunov exponent of the dynamical system. Therefore, the maximal diagonal line length L max {\displaystyle L_{\max }} or the divergence : [ 1 ]
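L max = max ( { ℓ i } ) , D I V = 1 / L max {\displaystyle L_{\max }=\max(\{\ell _{i}\}),\qquad DIV=1/L_{\max }} (standard definitions, assumed here)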
are also measures of the RQA. However, the relationship between these measures and the positive maximal Lyapunov exponent is not as simple as stated, but rather more complex (to calculate the Lyapunov exponent from an RP, the whole frequency distribution of the diagonal lines has to be considered). The divergence can follow the trend of the positive maximal Lyapunov exponent, but not more. Moreover, RPs of white noise processes can also contain a very long diagonal line, although very rarely, simply by chance. Therefore, the divergence cannot reliably reflect the maximal Lyapunov exponent.
The probability p ( ℓ ) {\displaystyle p(\ell )} that a diagonal line has exactly length ℓ {\displaystyle \ell } can be estimated from the frequency distribution P ( ℓ ) {\displaystyle P(\ell )} with p ( ℓ ) = P ( ℓ ) ∑ ℓ = l min N P ( ℓ ) {\displaystyle p(\ell )={\frac {P(\ell )}{\sum _{\ell =l_{\min }}^{N}P(\ell )}}} . The Shannon entropy of this probability, [ 5 ]
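E N T R = − ∑ ℓ = ℓ min N p ( ℓ ) ln p ( ℓ ) {\displaystyle ENTR=-\sum _{\ell =\ell _{\min }}^{N}p(\ell )\ln p(\ell )} (its standard form, assumed here)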
reflects the complexity of the deterministic structure in the system. However, this entropy depends sensitively on the bin number and, thus, may differ for different realisations of the same process, as well as for different data preparations.
The last measure of the RQA quantifies the thinning-out of the recurrence plot. The trend is the regression coefficient of a linear relationship between the density of recurrence points in a line parallel to the LOI and its distance to the LOI. More exactly, consider the recurrence rate in a diagonal line parallel to LOI of distance k ( diagonal-wise recurrence rate or τ-recurrence rate ): [ 1 ]
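A standard form (assumed here) is R R k = 1 N − k ∑ i = 1 N − k R i , i + k {\displaystyle RR_{k}={\frac {1}{N-k}}\sum _{i=1}^{N-k}R_{i,i+k}} ;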
then the trend is defined by [ 5 ]
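T R E N D = ∑ i = 1 N ~ ( i − N ~ / 2 ) ( R R i − ⟨ R R i ⟩ ) / ∑ i = 1 N ~ ( i − N ~ / 2 ) 2 {\displaystyle TREND=\sum _{i=1}^{\tilde {N}}(i-{\tilde {N}}/2)(RR_{i}-\langle RR_{i}\rangle )\,/\sum _{i=1}^{\tilde {N}}(i-{\tilde {N}}/2)^{2}} (quoted here in a commonly used form, which should be checked against the original reference)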
with ⟨ ⋅ ⟩ {\displaystyle \langle \cdot \rangle } as the average value and N ~ < N {\displaystyle {\tilde {N}}<N} . This latter relation is intended to avoid the edge effects of too low recurrence point densities at the edges of the recurrence plot. The measure trend provides information about the stationarity of the system.
Similar to the τ {\displaystyle \tau } -recurrence rate, the other measures based on the diagonal lines (DET, L, ENTR) can be defined diagonal-wise. These definitions are useful to study interrelations or synchronisation between different systems (using recurrence plots or cross recurrence plots ). [ 8 ]
Instead of computing the RQA measures of the entire recurrence plot, they can be computed in small windows moving over the recurrence plot along the LOI. This provides time-dependent RQA measures which allow detecting, e.g., chaos-chaos transitions. [ 9 ] [ 1 ] Note: the choice of the size of the window can strongly influence the measure trend . | https://en.wikipedia.org/wiki/Recurrence_quantification_analysis |
Recurrent evolution also referred to as repeated [ 1 ] [ 2 ] or replicated [ 3 ] evolution is the repeated evolution of a particular trait, character, or mutation . [ 4 ] Most evolution is the result of drift , often interpreted as the random chance of some alleles being passed down to the next generation and others not. Recurrent evolution is said to occur when patterns emerge from this stochastic process when looking across multiple distinct populations. These patterns are of particular interest to evolutionary biologists , as they can demonstrate the underlying forces governing evolution.
Recurrent evolution is a broad term, but it is usually used to describe recurring regimes of selection within or across lineages . [ 5 ] While most commonly used to describe recurring patterns of selection, it can also be used to describe recurring patterns of mutation ; for example, transitions are more common than transversions . [ 5 ] The concept encompasses both convergent evolution and parallel evolution ; it can be used to describe the observation of similar repeating changes through directional selection as well as the observation of highly conserved phenotypes or genotypes across lineages through continuous purifying selection over large periods of evolutionary time. [ 5 ]
Recurrent changes may be observed at the phenotype level or the genotype level. At the phenotype level, recurrent evolution can be observed across a continuum of levels, which for simplicity can be broken down into molecular phenotype, cellular phenotype, and organismal phenotype. At the genotype level, recurrent evolution can only be detected using DNA sequencing data. The same or similar sequences appearing in the genomes of different lineages indicates recurrent genomic evolution may have taken place. Recurrent genomic evolution can also occur within a lineage; an example of this would include some types of phase variation that involve highly directed changes at the DNA sequence level. The evolution of different forms of phase variation in separate lineages represents convergent and recurrent evolution toward increased evolvability . In organisms with long generation times, any potential recurrent genomic evolution within a lineage would be difficult to detect. Recurrent evolution has been studied most extensively at the organismal level, but with the advent of cheaper and faster sequencing technologies more attention is being paid to recurrent evolution at the genomic level.
The distinction between convergent and parallel evolution is somewhat unresolved in evolutionary biology. Some authors have claimed it is a false dichotomy , while others have argued that there are important distinctions. [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] These debates are important when considering recurrent evolution because the basis for the distinction is in the degree of phylogenetic relatedness among the organisms being considered. While convergent and parallel evolution can both be interpreted as forms of recurrent evolution, they involve multiple lineages whereas recurrent evolution can also take place within a single lineage. [ 5 ] [ 11 ]
As mentioned before, recurrent evolution within a lineage can be difficult to detect in organisms with long generation times; however, paleontological evidence can be used to show recurrent phenotypic evolution within a lineage. [ 11 ] The distinction between recurrent evolution across lineages and recurrent evolution within a lineage can be blurred because lineages do not have a set size and convergent or parallel evolution takes place among lineages that are all part of or within the same greater lineage. When speaking of recurrent evolution within a lineage, the simplest example is that given above, of the "on-off switch" used by bacteria in phase variation, but it can also involve phenotypic swings back and forth over longer periods of evolutionary history. [ 11 ] These may be caused by environmental swings – for example, natural fluctuations in the climate, or a pathogenic bacterium moving between hosts – and represent the other major source of recurrent evolution. [ 11 ] Recurrent evolution caused by convergent and parallel evolution, and recurrent evolution caused by environmental swings, are not necessarily mutually exclusive. If the environmental swings have the same effect on the phenotypes of different species, they could potentially evolve in parallel back and forth together through each swing.
On the island of Bermuda , the shell size of the land snail Poecilozonites has increased during glacial periods and shrunk again during warmer periods. It has been proposed that this is due to the increased size of the island during glacial periods (as a consequence of lower sea levels), which results in more large vertebrate predators and creates a selection pressure for larger shell size in the snails. [ 11 ]
In eusocial insects, new colonies are usually formed by a solitary queen, though this is not always the case. Dependent colony formation, when new colonies are formed by more than one individual, has evolved recurrently multiple times in ants, bees, and wasps. [ 12 ]
Recurrent evolution of polymorphisms in colonial invertebrate bryozoans of the order Cheilostomatida has given rise to zooid polymorphs and certain skeletal structures several times in evolutionary history. [ 13 ]
Neotropical tanagers of the genera Diglossa and Diglossopis , known as flowerpiercers, have undergone recurrent evolution of divergent bill types. [ 14 ]
There is evidence for at least 133 transitions between dioecy and hermaphroditism in the sexual systems of bryophytes . Additionally, the transition rate from hermaphroditism to dioecy was approximately twice the rate in the reverse direction, suggesting greater diversification among hermaphrodites and demonstrating the recurrent evolution of dioecy in mosses. [ 15 ]
C4 photosynthesis has evolved over 60 times in different plant lineages. [ 16 ] This has occurred through the repurposing of genes present in a C3 photosynthetic common ancestor, altering levels and patterns of gene expression , and adaptive changes in the protein-coding region. [ 16 ] Recurrent lateral gene transfer has also played a role in optimizing the C4 pathway by providing better adapted C4 genes to the plants. [ 16 ]
Certain genetic mutations occur with measurable and consistent frequency. [ 17 ] Deleterious and neutral alleles can increase in frequency if the mutation rate to this phenotype is sufficiently higher than the reverse mutation rate; however, this appears to be rare. Beyond creating new genetic variation for selection to act upon, mutation plays a primary role in evolution when mutations in one direction are "weeded out by natural selection" and mutations in the other direction are neutral. [ 17 ] This is known as purifying selection when it acts to maintain functionally important characters but also results in the loss or diminished size of useless organs as the functional constraint is lifted. An example of this is the diminished size of the Y chromosome in mammals, which can be attributed to recurrent mutations and recurrent evolution. [ 17 ]
The existence of mutational "hotspots" within the genome often gives rise to recurrent evolution. Hotspots can arise at certain nucleotide sequences because of interactions between the DNA and DNA repair , replication , and modification enzymes. [ 18 ] These sequences can act like fingerprints to help researchers locate mutational hotspots. [ 18 ]
Cis-regulatory elements are frequent targets of evolution resulting in varied morphology. [ 19 ] When looking at long-term evolution, mutations in cis-regulatory regions appear to be even more common. [ 20 ] In other words, more interspecific morphological differences are caused by mutations in cis-regulatory regions than intraspecific differences. [ 19 ]
Across Drosophila species, highly conserved blocks not only in the histone fold domain but also in the N-terminal tail of centromeric histone H3 (CenH3) demonstrate recurrent evolution by purifying selection. In fact very similar oligopeptides in the N-terminal tails of CenH3 have also been observed in humans and in mice. [ 21 ]
Many divergent eukaryotic lineages have recurrently evolved highly AT-rich genomes. [ 5 ] GC-rich genomes are rarer among eukaryotes, but when they evolve independently in two different species the recurrent evolution of similar preferential codon usages will usually result. [ 5 ]
"Generally, regulatory genes occupying nodal position in gene regulatory networks , and which function as morphogenetic switches, can be anticipated to be prime targets for evolutionary changes and therefore repeated evolution." [ 22 ] | https://en.wikipedia.org/wiki/Recurrent_evolution |
In mathematics and physics , a recurrent tensor , with respect to a connection ∇ {\displaystyle \nabla } on a manifold M , is a tensor T for which there is a one-form ω on M such that
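∇ T = ω ⊗ T {\displaystyle \nabla T=\omega \otimes T} (the standard defining condition, stated here for completeness).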
An example for recurrent tensors are parallel tensors which are defined by
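∇ T = 0 {\displaystyle \nabla T=0}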
with respect to some connection ∇ {\displaystyle \nabla } .
If we take a pseudo-Riemannian manifold ( M , g ) {\displaystyle (M,g)} then the metric g is a parallel and therefore recurrent tensor with respect to its Levi-Civita connection , which is defined via
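∇ g = 0 {\displaystyle \nabla g=0} (metric compatibility)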
and its property to be torsion-free.
Parallel vector fields ( ∇ X = 0 {\displaystyle \nabla X=0} ) are examples of recurrent tensors that find importance in mathematical research. For example, if X {\displaystyle X} is a recurrent non-null vector field on a pseudo-Riemannian manifold satisfying
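∇ X = ω ⊗ X {\displaystyle \nabla X=\omega \otimes X}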
for some closed one-form ω {\displaystyle \omega } , then X can be rescaled to a parallel vector field. [ 1 ] In particular, non-parallel recurrent vector fields are null vector fields.
Another example appears in connection with Weyl structures . Historically, Weyl structures emerged from the considerations of Hermann Weyl with regards to properties of parallel transport of vectors and their length. [ 2 ] By demanding that a manifold have an affine parallel transport in such a way that the manifold is locally an affine space , it was shown that the induced connection had a vanishing torsion tensor
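T ( X , Y ) = ∇ X Y − ∇ Y X − [ X , Y ] = 0 {\displaystyle T(X,Y)=\nabla _{X}Y-\nabla _{Y}X-[X,Y]=0} (the standard expression for the torsion tensor, stated here for completeness).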
Additionally, he claimed that the manifold must have a particular parallel transport in which the ratio of two transported vectors is fixed. The corresponding connection ∇ ′ {\displaystyle \nabla '} which induces such a parallel transport satisfies
for some one-form φ {\displaystyle \varphi } . Such a metric is a recurrent tensor with respect to ∇ ′ {\displaystyle \nabla '} . As a result, Weyl called the resulting manifold ( M , g ) {\displaystyle (M,g)} with affine connection ∇ {\displaystyle \nabla } and recurrent metric g {\displaystyle g} a metric space. In this sense, Weyl was not just referring to one metric but to the conformal structure defined by g {\displaystyle g} .
Under the conformal transformation g → e λ g {\displaystyle g\rightarrow e^{\lambda }g} , the form φ {\displaystyle \varphi } transforms as φ → φ − d λ {\displaystyle \varphi \rightarrow \varphi -d\lambda } . This induces a canonical map F : [ g ] → Λ 1 ( M ) {\displaystyle F:[g]\rightarrow \Lambda ^{1}(M)} on ( M , [ g ] ) {\displaystyle (M,[g])} defined by
where [ g ] {\displaystyle [g]} is the conformal structure. F {\displaystyle F} is called a Weyl structure, [ 3 ] which more generally is defined as a map with property
One more example of a recurrent tensor is the curvature tensor R {\displaystyle {\mathcal {R}}} on a recurrent spacetime, [ 4 ] for which | https://en.wikipedia.org/wiki/Recurrent_tensor |
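∇ R = ω ⊗ R {\displaystyle \nabla {\mathcal {R}}=\omega \otimes {\mathcal {R}}} holds for some one-form ω (the usual defining condition of a recurrent curvature tensor, assumed here).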