Rice-hull bagwall construction is a system of building, [1] with results aesthetically similar to the use of earthbag or cob construction. [2] Woven polypropylene bags (or tubes) are tightly filled with raw rice hulls, and these are stacked up, layer upon layer, with strands of four-pronged barbed wire between them. A surrounding "cage" composed of mats of welded or woven steel mesh (remesh or "poultry wire") on both sides, wired together between bag layers with, for example, rebar tie-wire, is then stuccoed to form the building walls.
Mixing rice hulls with a solution of boric acid and borax renders them fire-resistant. A similar result can be achieved by exposing the hulls to direct heat, for example from a poured metal ingot, until they are reduced to ash. In addition, the ash form does not appeal to vermin. [3]

Source: https://en.wikipedia.org/wiki/Rice-hull_bagwall_construction
The Rice University Department of Electrical and Computer Engineering is one of nine academic departments at the George R. Brown School of Engineering at Rice University. Ashutosh Sabharwal is the department chair. Originally the Rice Department of Electrical Engineering, it was renamed Electrical and Computer Engineering in 1984. [1]
Rice ECE faculty perform research in the following areas: Computer Engineering; Data Science; Neuroengineering; Photonics, Electronics and Nano-devices; and Systems. [2] Rice has a long history in digital signal processing (DSP) dating back to the late 1960s.
Computer Engineering faculty have a research focus in analog and mixed-signal design, VLSI signal processing, computer architecture and embedded systems, biosensors and computer vision, and hardware security and storage systems, including applications to education. Biosensors and mobile wireless healthcare are growing application areas in embedded systems research. Smartphones with imaging devices are leading to new areas in computer vision and sensing. In the area of computer architecture, research interests include parallel computing, large-scale storage systems, and resource scheduling for performance and power.
Data Science faculty integrate the foundations, tools, and techniques involving data acquisition (sensors and systems), data analytics (machine learning, statistics), and data storage and computing infrastructure (GPU/CPU computing, FPGAs, cloud computing, security and privacy) in order to enable meaningful extraction of actionable information from diverse and potentially massive data sources.
Neuroengineering faculty are members of the Rice Center for Neuroengineering, a collaborative effort with Texas Medical Center researchers. They develop technology for treating and diagnosing neural diseases. Current research areas include interrogating neural circuits at the cellular level, analyzing neuronal data in real time, and manipulating healthy or diseased neural circuit activity and connectivity using nanoelectronics, optics, and emerging photonics technologies.
Photonics, Electronics and Nano-device researchers focus on nanophotonics and plasmonics; optical nanosensor and nano-actuator development; studies of new materials, in particular nanomaterials and magnetically active materials; imaging and image processing, including multispectral imaging and terahertz imaging; ultrafast spectroscopy and dynamics; laser applications in remote and point sensing, especially for trace gas detection; [3] nanometer-scale characterization of surfaces, molecules, and devices; organic semiconductor devices; single-molecule transistors; and applications of nanoshells in biomedicine.
Current Rice ECE Systems research spans a wide range of areas, including image and video analysis, representation, and compression; wavelets and multiscale methods; statistical signal processing, pattern recognition, and learning theory; distributed signal processing and sensor networks; communication systems; computational neuroscience; and wireless networking.

Source: https://en.wikipedia.org/wiki/Rice_University_Electrical_and_Computer_Engineering
In computability theory, the Rice–Shapiro theorem is a generalization of Rice's theorem, named after Henry Gordon Rice and Norman Shapiro. It states that when a semi-decidable property of partial computable functions is true on a certain partial function, one can extract a finite subfunction such that the property is still true.
The informal idea of the theorem is that the "only general way" to obtain information on the behavior of a program is to run the program, and, because a computation is finite, one can only try the program on a finite number of inputs.
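A toy sketch can make this concrete. Here a "program" is modeled as a table mapping each input on which it halts to a pair (value, number of steps needed); this representation is an assumption for illustration, standing in for a step-bounded interpreter. Dovetailing over (input, budget) pairs then shows that any finite amount of observation reveals only a finite subfunction:

```python
from itertools import count

def run(program, x, budget):
    """Return the program's value on x if it converges within `budget` steps, else None."""
    entry = program.get(x)          # absence from the table models divergence on x
    if entry is None:
        return None
    value, steps = entry
    return value if steps <= budget else None

def dovetail(program, probe_limit):
    """Probe (input, budget) pairs diagonally; return the finite subfunction
    observed within `probe_limit` probes."""
    seen, probes = {}, 0
    for n in count():
        for x in range(n + 1):
            value = run(program, x, n - x)
            if value is not None:
                seen.setdefault(x, value)
            probes += 1
            if probes >= probe_limit:
                return seen

# f(0) = 5 after 2 steps, f(1) diverges, f(2) = 7 after 1 step.
f = {0: (5, 2), 2: (7, 1)}
print(dovetail(f, 50))  # {0: 5, 2: 7} -- a finite subfunction of f
```

No matter how large the probe budget, the observed behavior is always a finite subfunction, which is the intuition behind the theorem.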
A closely related theorem is the Kreisel–Lacombe–Shoenfield–Tseitin theorem (or KLST theorem), which was obtained independently by Georg Kreisel, Daniel Lacombe and Joseph R. Shoenfield, [1] and by Grigori Tseitin. [2]
Rice–Shapiro theorem. [3]: 482 [4] [5] Let P be a set of partial computable functions such that the index set of P (i.e., the set of indices e such that φ_e ∈ P, for some fixed admissible numbering φ) is semi-decidable. Then for any partial computable function f, it holds that P contains f if and only if P contains a finite subfunction of f (i.e., a partial function defined at finitely many points, which takes the same values as f at those points).
Kreisel–Lacombe–Shoenfield–Tseitin theorem. [3]: 362 [1] [2] [6] [7] [8]: 440 Let P be a set of total computable functions such that the index set of P is decidable with a promise that the input is the index of a total computable function (i.e., there is a partial computable function D which, given an index e such that φ_e is total, returns 1 if φ_e ∈ P and 0 otherwise; D(e) need not be defined if φ_e is not total). We say that two total functions f, g "agree until n" if f(k) = g(k) holds for all k ≤ n. Then for any total computable function f, there exists n such that for every total computable function g which agrees with f until n, we have f ∈ P ⟺ g ∈ P.
By the Rice–Shapiro theorem, it is neither semi-decidable nor co-semi-decidable whether a given program terminates on every input, or whether it terminates on infinitely many inputs.
By the Kreisel–Lacombe–Shoenfield–Tseitin theorem, it is undecidable whether a given program which is assumed to always terminate computes some fixed total computable function (for example, the constant zero function).
The two theorems are closely related, and both relate to Rice's theorem.
It is natural to wonder what can be said about semi-decidable sets of total computable functions. Perhaps surprisingly, these need not satisfy the conclusions of the Rice–Shapiro and Kreisel–Lacombe–Shoenfield–Tseitin theorems. The following counterexample is due to Richard M. Friedberg. [9] [8]: 444
Let Q be the set of total computable functions f : ℕ → ℕ such that f is not the constant zero function and, defining n to be the largest index such that f(i) = 0 for all i ≤ n, there exists a program of code e ≤ n such that φ_e(i) is defined and equal to f(i) for each i ≤ n + 1. Let P be the set Q with the constant zero function added.
On the one hand, P contains the constant zero function by definition, yet there is no n such that if a total computable g agrees with the constant zero function until n then g ∈ P. Indeed, given n, we can define a total function g by setting g(n + 1) to some value larger than every φ_e(n + 1) for e ≤ n + 1 such that φ_e(n + 1) is defined, and g(n′) = 0 for n′ ≠ n + 1. The function g is zero except at n + 1, thus computable; it agrees with the zero function up to n, but it does not belong to P by construction.
On the other hand, given a program e and a promise that φ_e is total, it is possible to semi-decide whether φ_e ∈ P by dovetailing, running one task to semi-decide φ_e ∈ Q, which can clearly be done, and another task to semi-decide whether φ_e(k) = 0 for all k ≤ e. This is correct because the zero function is detected by the second task, and conversely, if the second task returns true, then either φ_e is zero, or φ_e is only zero up to an index n, which must satisfy e ≤ n, which by definition of Q implies that φ_e ∈ Q.
Let P be a set of partial computable functions with semi-decidable index set. We prove the two implications separately.
We first prove that if f is a finite subfunction of g and f ∈ P, then g ∈ P. The hypothesis that f is finite is in fact of no use.
The proof uses a diagonal argument typical of theorems in computability. We build a program p, which may refer to its own index by Kleene's recursion theorem, taking an input x. Using a standard dovetailing technique, p runs two tasks in parallel:
1. The first task runs the semi-decision procedure for the index set of P on p's own index; if it accepts, the task computes g(x) and returns it if it is defined.
2. The second task computes f(x) and returns it if it is defined.
If φ_p ∉ P, the first task can never finish, therefore the result of p is entirely determined by the second task, thus φ_p is simply f, a contradiction (since f ∈ P). This shows that φ_p ∈ P.
Thus, both tasks are relevant; however, because f is a subfunction of g and the second task returns f(x) = g(x) when f(x) is defined, while the first task returns g(x) when defined, the program in fact computes g, i.e., φ_p = g, and therefore g ∈ P.
Conversely, we prove that if P contains a partial computable function f, then it contains a finite subfunction of f. Let us fix f ∈ P. We build a program p, referring to its own index by the recursion theorem, which takes input x and runs the following steps:
1. Run the semi-decision procedure for the index set of P on p's own index for x steps.
2. If it has accepted within those x steps, loop forever.
3. Otherwise, compute f(x) and return it if it is defined.
Suppose that φ_p ∉ P. This implies that the semi-algorithm for semi-deciding P used in the first step never returns true. Then p computes f, and this contradicts the assumption f ∈ P. Thus, we must have φ_p ∈ P, and the algorithm for semi-deciding P returns true on p after a certain number of steps n. The partial function φ_p can only be defined on inputs x such that x ≤ n, and it returns f(x) on such inputs, so it is a finite subfunction of f that belongs to P.
A total function h : ℕ → ℕ is said to be ultimately zero if it takes the value zero except at a finite number of points, i.e., there exists N such that h(n) = 0 for all n ≥ N. Note that such a function is always computable (it can be computed by simply checking whether the input is in a certain predefined list, and otherwise returning zero).
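Concretely, an ultimately zero function can be encoded by the finite list of its values before the all-zero tail; a minimal sketch (the encoding is an illustration choice):

```python
def ultimately_zero(prefix):
    """Total computable function: prefix[n] for n < len(prefix), zero afterwards."""
    return lambda n: prefix[n] if n < len(prefix) else 0

h = ultimately_zero([3, 0, 7])
print([h(n) for n in range(6)])  # [3, 0, 7, 0, 0, 0]
```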
We fix a computable enumeration U of all total functions which are ultimately zero; that is, U is such that every U(k) is an ultimately zero function, every ultimately zero function appears as U(k) for some k, and the map (k, x) ↦ U(k)(x) is computable.
We can build U by standard techniques (e.g., for increasing N, enumerate the ultimately zero functions which are bounded by N and zero on inputs larger than N).
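The construction in the parenthesis can be sketched concretely. Encoding an ultimately zero function as a finite prefix of values (the function is zero beyond the prefix), we enumerate, for each N, all prefixes of length N with values bounded by N; duplicates in the enumeration are harmless:

```python
from itertools import product

def enumerate_ultimately_zero(up_to_N):
    """Prefixes encoding ultimately zero functions: for each N < up_to_N,
    all prefixes of length N with values in 0..N (zero beyond the prefix)."""
    prefixes = []
    for N in range(up_to_N):
        prefixes.extend(list(p) for p in product(range(N + 1), repeat=N))
    return prefixes

U = enumerate_ultimately_zero(3)
print(len(U))  # 1 + 2 + 9 = 12 prefixes
```

Letting up_to_N grow without bound yields the full enumeration; the same function may appear under several prefixes, which does not matter for the proof.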
Let P be as in the statement of the theorem: a set of total computable functions such that there is an algorithm which, given an index e and a promise that φ_e is total, decides whether φ_e ∈ P.
We first prove a lemma: for every total computable function f and every integer N, there exists an ultimately zero function h such that h agrees with f until N, and f ∈ P ⟺ h ∈ P.
To prove this lemma, fix a total computable function f and an integer N, and let B be the boolean "f ∈ P". Build a program p, referring to its own index by the recursion theorem, which takes input x and runs these steps:
1. If x ≤ N, return f(x).
2. Run the decision procedure for P on p's own index for x steps; if it returns B within those steps, return 0.
3. Otherwise, return f(x).
Clearly, p always terminates, i.e., φ_p is total. Therefore, the promise made when running the decision procedure for P on p is fulfilled.
Suppose for contradiction that one of f and φ_p belongs to P and the other does not, i.e., (φ_p ∈ P) ≠ B. Then we see that p computes f, since the decision procedure for P does not return B on p no matter the number of steps. Thus, we have f = φ_p, contradicting the fact that one of f and φ_p belongs to P and the other does not. This argument proves that f ∈ P ⟺ φ_p ∈ P. Then, the second step makes p return zero for sufficiently large x, thus φ_p is ultimately zero; and by construction (due to the first step), φ_p agrees with f until N. Therefore, we can take h = φ_p and the lemma is proved.
With the previous lemma, we can now prove the Kreisel–Lacombe–Shoenfield–Tseitin theorem. Again, fix P as in the theorem statement, let f be a total computable function and let B be the boolean "f ∈ P". Build the program p, referring to its own index by the recursion theorem, which takes input x and runs these steps:
1. Run the decision procedure for P on p's own index for x steps.
2. If it returns B within those steps, say after n steps, search in parallel for a k such that U(k) agrees with f until n and the decision procedure for P returns ¬B on an index of U(k); when such a k is found, return U(k)(x).
3. Otherwise, compute f(x) and return it.
We first prove that the decision procedure for P returns B on p. Suppose by contradiction that this is not the case (it returns ¬B, or does not terminate). Then p actually computes f. In particular, φ_p is total, so the promise made when running the decision procedure on p is fulfilled, and it returns the boolean "φ_p ∈ P", which is "f ∈ P", i.e., B, contradicting the assumption.
Let n be the number of steps that the decision procedure for P takes to return B on p. We claim that n satisfies the conclusion of the theorem: for every total computable function g which agrees with f until n, it holds that f ∈ P ⟺ g ∈ P. Assume for contradiction that there exists a total computable g which agrees with f until n and such that (g ∈ P) ≠ B.
Applying the lemma again, there exists k such that U(k) agrees with g until n and g ∈ P ⟺ U(k) ∈ P. Since both U(k) and f agree with g until n, U(k) also agrees with f until n, and since (g ∈ P) ≠ B and g ∈ P ⟺ U(k) ∈ P, we have (U(k) ∈ P) ≠ B. Therefore, U(k) satisfies the conditions of the parallel search step in the program p, namely: U(k) agrees with f until n and (U(k) ∈ P) ≠ B. This proves that the search in the second step always terminates. We fix k to be the value that it finds.
We observe that φ_p = U(k). Indeed, either the second step of p returns U(k)(x), or the third step returns f(x); but the latter case only happens for x ≤ n, and we know that U(k) agrees with f until n.
In particular, φ_p = U(k) is total. Thus the promise made when running the decision procedure for P on p is fulfilled, so it returns the boolean "φ_p ∈ P" on p.
We have reached a contradiction: on the one hand, the boolean "φ_p ∈ P" is the return value of the decision procedure for P on p, which is B; on the other hand, we have φ_p = U(k), and we know that (U(k) ∈ P) ≠ B.
For any finite unary function θ on the integers, let C(θ) denote the 'frustum' of all partial-recursive functions that are defined, and agree with θ, on θ's domain. Equip the set of all partial-recursive functions with the topology generated by these frusta as a base. Note that for every frustum C, the index set Ix(C) is recursively enumerable. More generally, it holds for every set A of partial-recursive functions: Ix(A) is recursively enumerable iff A is a recursively enumerable union of frusta.
The Kreisel–Lacombe–Shoenfield–Tseitin theorem has been applied to foundational problems in computational social choice (more broadly, algorithmic game theory). For instance, Kumabe and Mihara [10] [11] apply this result to an investigation of the Nakamura numbers for simple games in cooperative game theory and social choice theory.

Source: https://en.wikipedia.org/wiki/Rice–Shapiro_theorem
In logic, Richard's paradox is a semantical antinomy of set theory and natural language first described by the French mathematician Jules Richard in 1905. The paradox is ordinarily used to motivate the importance of distinguishing carefully between mathematics and metamathematics.
Kurt Gödel specifically cites Richard's antinomy as a semantical analogue to his syntactical incompleteness result in the introductory section of "On Formally Undecidable Propositions in Principia Mathematica and Related Systems I". The paradox was also a motivation for the development of predicative mathematics.
The original statement of the paradox, due to Richard (1905), is strongly related to Cantor's diagonal argument on the uncountability of the set of real numbers.
The paradox begins with the observation that certain expressions of natural language define real numbers unambiguously, while other expressions of natural language do not. For example, "The real number the integer part of which is 17 and the nth decimal place of which is 0 if n is even and 1 if n is odd" defines the real number 17.1010101... = 1693/99, whereas the phrase "the capital of England" does not define a real number, nor does the phrase "the smallest positive integer not definable in under sixty letters" (see Berry's paradox).
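The arithmetic of the example can be checked mechanically; the 199-digit truncation below is an arbitrary illustration choice:

```python
from fractions import Fraction

# nth decimal place: 0 if n is even, 1 if n is odd (n = 1, 2, 3, ...)
def digit(n):
    return 0 if n % 2 == 0 else 1

# Truncate the decimal expansion after 199 places and recover the fraction.
x = Fraction(17) + sum(Fraction(digit(n), 10**n) for n in range(1, 200))
print(x.limit_denominator(1000))  # 1693/99
```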
There is an infinite list of English phrases (such that each phrase is of finite length, but the list itself is of infinite length) that define real numbers unambiguously. We first arrange this list of phrases by increasing length, then order all phrases of equal length lexicographically, so that the ordering is canonical. This yields an infinite list of the corresponding real numbers: r_1, r_2, .... Now define a new real number r as follows. The integer part of r is 0, the nth decimal place of r is 1 if the nth decimal place of r_n is not 1, and the nth decimal place of r is 2 if the nth decimal place of r_n is 1.
The preceding paragraph is an expression in English that unambiguously defines a real number r. Thus r must be one of the numbers r_n. However, r was constructed so that it cannot equal any of the r_n (thus, r is an undefinable number). This is the paradoxical contradiction.
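The diagonal construction is mechanical once the digit expansions of the r_n are laid out; a sketch over an invented finite table:

```python
def diagonal(digit_rows):
    """digit_rows[n] lists decimal digits of the (n+1)th listed number; the
    diagonal number's nth digit is 1 unless that digit is 1, in which case 2."""
    return [2 if row[i] == 1 else 1 for i, row in enumerate(digit_rows)]

rows = [[1, 0, 3],
        [4, 1, 5],
        [9, 9, 9]]
d = diagonal(rows)
print(d)  # [2, 2, 1] -- differs from row n at position n
```

By construction the result differs from every row at its own diagonal position, which is exactly why r cannot equal any r_n.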
Richard's paradox results in an untenable contradiction, which must be analyzed to find an error.
The proposed definition of the new real number r clearly includes a finite sequence of characters, and hence it seems at first to be a definition of a real number. However, the definition refers to definability-in-English itself. If it were possible to determine which English expressions actually do define a real number, and which do not, then the paradox would go through. Thus the resolution of Richard's paradox is that there is not any way to unambiguously determine exactly which English sentences are definitions of real numbers (see Good 1966). That is, there is not any way to describe in a finite number of words how to tell whether an arbitrary English expression is a definition of a real number. This is not surprising, as the ability to make this determination would also imply the ability to solve the halting problem and perform any other non-algorithmic calculation that can be described in English.
A similar phenomenon occurs in formalized theories that are able to refer to their own syntax, such as Zermelo–Fraenkel set theory (ZFC). Say that a formula φ(x) defines a real number if there is exactly one real number r such that φ(r) holds. Then it is not possible to define, in ZFC, the set of all (Gödel numbers of) formulas that define real numbers. For, if it were possible to define this set, it would be possible to diagonalize over it to produce a new definition of a real number, following the outline of Richard's paradox above. Note that the set of formulas that define real numbers may exist, as a set F; the limitation of ZFC is that there is no formula that defines F without reference to other sets. This is related to Tarski's undefinability theorem.
The example of ZFC illustrates the importance of distinguishing the metamathematics of a formal system from the statements of the formal system itself. The property D(φ) that a formula φ of ZFC defines a unique real number is not itself expressible by ZFC, but must be considered as part of the metatheory used to formalize ZFC. From this viewpoint, Richard's paradox results from treating a construction of the metatheory (the enumeration of all statements in the original system that define real numbers) as if that construction could be performed in the original system.
A variation of the paradox uses integers instead of real numbers, while preserving the self-referential character of the original. Consider a language (such as English) in which the arithmetical properties of integers are defined. For example, "the first natural number" defines the property of being the first natural number, one; and "divisible by exactly two natural numbers" defines the property of being a prime number. (It is clear that some properties cannot be defined explicitly, since every deductive system must start with some axioms. But for the purposes of this argument, it is assumed that phrases such as "an integer is the sum of two integers" are already understood.) While the list of all such possible definitions is itself infinite, it is easily seen that each individual definition is composed of a finite number of words, and therefore also a finite number of characters. Since this is true, we can order the definitions, first by length and then lexicographically.
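The canonical ordering (by length, then lexicographically) is easy to realize in code; the sample phrases below are invented for illustration:

```python
phrases = [
    "the first natural number",
    "divisible by 3",
    "an even number",
]
# Order by length, breaking ties lexicographically, as described in the text.
canonical = sorted(phrases, key=lambda s: (len(s), s))
print(canonical)  # ['an even number', 'divisible by 3', 'the first natural number']
```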
Now, we may map each definition to the set of natural numbers, such that the definition with the smallest number of characters, breaking ties alphabetically, corresponds to the number 1, the next definition in the series corresponds to 2, and so on. Since each definition is associated with a unique integer, it is possible that occasionally the integer assigned to a definition fits that definition. If, for example, the definition "not divisible by any integer other than 1 and itself" happened to be 43rd, then this would be true: since 43 is itself not divisible by any integer other than 1 and itself, the number of this definition has the property of the definition itself. However, this may not always be the case. If the definition "divisible by 3" were assigned to the number 58, then the number of the definition does not have the property of the definition itself, since 58 is not divisible by 3. This latter example is termed Richardian: if a number is Richardian, then the definition corresponding to that number is a property that the number itself does not have. (More formally, "x is Richardian" is equivalent to "x does not have the property designated by the defining expression with which x is correlated in the serially ordered set of definitions".) Thus, in this example, 58 is Richardian, but 43 is not.
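The Richardian test can be sketched as code. The assignment of definitions to the numbers 43 and 58 is hypothetical, mirroring the article's examples:

```python
def is_richardian(n, definitions):
    """n is Richardian iff n lacks the property described by definition number n."""
    return not definitions[n](n)

# Hypothetical numbering: definition 43 is primality, definition 58 is "divisible by 3".
definitions = {
    43: lambda x: x > 1 and all(x % d != 0 for d in range(2, x)),
    58: lambda x: x % 3 == 0,
}
print(is_richardian(43, definitions))  # False: 43 is prime, so it fits its own definition
print(is_richardian(58, definitions))  # True: 58 is not divisible by 3
```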
Now, since the property of being Richardian is itself a numerical property of integers, it belongs in the list of all definitions of properties. Therefore, the property of being Richardian is assigned some integer, n. For example, the definition "being Richardian" might be assigned to the number 92. Finally, the paradox becomes: Is 92 Richardian? Suppose 92 is Richardian. This is only possible if 92 does not have the property designated by the defining expression with which it is correlated. In other words, this means 92 is not Richardian, contradicting our assumption. However, if we suppose 92 is not Richardian, then it does have the defining property to which it corresponds. This, by definition, means that it is Richardian, again contrary to assumption. Thus, the statement "92 is Richardian" cannot consistently be designated as either true or false.
Another opinion concerning Richard's paradox relates to mathematical predicativism. By this view, the real numbers are defined in stages, with each stage only making reference to previous stages and other things that have already been defined. From a predicative viewpoint, it is not valid to quantify over all real numbers in the process of generating a new real number, because this is believed to result in a circularity problem in the definitions. Set theories such as ZFC are not based on this sort of predicative framework, and allow impredicative definitions.
Richard (1905) presented a solution to the paradox from the viewpoint of predicativism. Richard claimed that the flaw of the paradoxical construction was that the expression for the construction of the real number r does not actually define a real number unambiguously, because the statement refers to the construction of an infinite set of real numbers, of which r itself is a part. Thus, Richard says, the real number r will not be included as any r_n, because the definition of r does not meet the criteria for being included in the sequence of definitions used to construct the sequence r_n. Contemporary mathematicians agree that the definition of r is invalid, but for a different reason: there is no well-defined notion of when an English phrase defines a real number, and so there is no unambiguous way to construct the sequence r_n.
Although Richard's solution to the paradox did not gain favor with mathematicians, predicativism is an important part of the study of the foundations of mathematics. Predicativism was first studied in detail by Hermann Weyl in Das Kontinuum, wherein he showed that much of elementary real analysis can be conducted in a predicative manner starting with only the natural numbers. More recently, predicativism has been studied by Solomon Feferman, who has used proof theory to explore the relationship between predicative and impredicative systems. [1]

Source: https://en.wikipedia.org/wiki/Richard's_paradox
Richard "Dick" A. Andersen (November 16, 1942 – June 16, 2019) [4] was a professor of chemistry at the University of California, Berkeley, and a faculty senior scientist in the chemical sciences division of Lawrence Berkeley National Laboratory. [5]
Born in Oklahoma in 1942, Richard Allan Andersen was raised and educated in the small town of Yankton, South Dakota. [6] He obtained his bachelor's degree in 1965 from the University of South Dakota. [5] Andersen pursued graduate studies at the University of Wyoming, working under the supervision of Professor Geoffrey Coates; [4] [6] Andersen was Coates' last student. [6] In 1973, Andersen earned his Ph.D. for work on several fundamental organometallic and alkoxide compounds of beryllium. [7] [8] [9] [10] [11] [12]
Andersen then spent a year as a postdoctoral researcher at the Oslo Centre for Industrial Research. [13] On the day it was announced that Geoffrey Wilkinson and Ernst Otto Fischer would share the 1973 Nobel Prize in Chemistry, Andersen received an offer to conduct his postdoctoral research in Wilkinson's laboratory at Imperial College London. [5] [6] Andersen took up this post a few months later, in 1974. [13] In June 1976 he joined the faculty of the University of California, Berkeley's department of chemistry. [4] He remained a professor in the department until his death in 2019. [4]
Andersen was also active in teaching throughout his career, and was well known for teaching from the primary inorganic chemistry literature, [14] as well as for his hands-on approach to teaching undergraduate laboratory courses. [4] [5]
Andersen began his independent research career at UC Berkeley in 1976. Initially his research focused on ligand substitution patterns in quadruply bonded Mo2 complexes. [6] He also studied actinide coordination complexes bearing the sterically bulky amido ligand –N(SiMe3)2, including the uranium(III) compound U[N(SiMe3)2]3, [6] [15] which was later found to have pyramidal geometry. [16]
Andersen was awarded many visiting professorships around the world, including appointments in Sevilla, Lyon, Montpellier, New South Wales, and Zurich. [6] He was also an Alexander von Humboldt Professor at various institutions in Germany (1994). [6] [17] Andersen was a member of the Royal Society of Chemistry, the American Chemical Society, and Sigma Xi. [17]

Source: https://en.wikipedia.org/wiki/Richard_A._Andersen_(chemist)
Richard B. Wells (born 1953) was a Professor Emeritus at the University of Idaho in Moscow, Idaho . From 2006 until his retirement in 2013, he held concurrent appointments as Professor of Electrical and Computer Engineering, Professor of Neuroscience, Adjunct Professor of Philosophy, and Adjunct Professor of Materials Science & Engineering. He was named a Senior Member of the Institute of Electrical and Electronics Engineers in 2001. [ 3 ]
Wells holds a B.S. degree in electrical engineering from Iowa State University in Ames, Iowa , where he graduated with distinction in May 1975. He received his M.S. in electrical engineering in May 1979 from Stanford University and his Ph.D. in electrical engineering in May 1985 from the University of Idaho. [ 3 ] | https://en.wikipedia.org/wiki/Richard_B._Wells |
Richard F. W. Bader FRSC FCIC (October 15, 1931 – January 15, 2012) was a Canadian quantum chemist , noted for his work on the atoms in molecules theory. This theory attempts to establish a physical basis for many of the working concepts of chemistry , such as atoms in molecules and bonding, in terms of the topology of the electron density function in three-dimensional space. [ 1 ] Alongside the eminent chemist Ronald Gillespie , he had a significant influence on inorganic chemistry education in Canada.
He was born in 1931 in Kitchener, Ontario, Canada. His parents were Albert and Alvina Bader, who had immigrated from Switzerland. [ 2 ] His father was a butcher at Burns Pride of Canada and his mother was a housekeeper at Kitchener Hospital of Waterloo. [ 2 ] A scholarship from McMaster University allowed him to earn a B.Sc. in 1953. His father was his strongest supporter, encouraging him to "never quit" his education and his dream. [ 3 ] He finished his master's degree in science at McMaster University in 1955 and obtained a PhD (1958) from the Massachusetts Institute of Technology (MIT). He did postdoctoral work at MIT and the University of Cambridge. He was appointed assistant professor in the Department of Chemistry at the University of Ottawa in 1959 and promoted to associate professor in 1962. He moved to McMaster University as associate professor in 1963, became full professor in 1966, and took emeritus status in 1996. [ 4 ]
He was elected a Fellow of the Royal Society of Canada in 1980 [ 1 ] [ 5 ] and was a Fellow of the Chemical Institute of Canada. [ 5 ] Bader received a John Simon Guggenheim Memorial Fellowship [ 5 ] and was elected a Grand Fellow of the MIRCE Academy, Exeter, UK, in 2010. [ 6 ] Over his long career, he published 223 refereed articles and book chapters on chemistry and physics. [ 5 ] In recent years, Bader's works have been cited more than 3,000 times per year.
Richard Bader discovered that electron density is central to explaining the behavior of atoms in molecules. [ 3 ] According to his theory, there are no atomic orbitals in molecules. This was a new idea that went against accepted theories; he fought hard for his revolutionary ideas and initially found it difficult to publish. [ 3 ] In the end the theory gained acceptance, and he published the book Atoms in Molecules: A Quantum Theory in 1991. [ 1 ] [ 7 ] Bader said: 'We had a lot of deep discussions, and it started to occur to me that chemistry was in a real bind because we had this very powerful molecular structure hypothesis that came from the cauldron of experimental physics. But everyone had their own dictionary - different people had a different idea of what a bond was. We were trying to do science with everyone using their own private dictionary. I decided that when I left, I would make it my goal to find the physical basis of chemistry.' [ 4 ] Bader helped create the widely used software program AIMPAC, which predicts the properties of molecules based on the atoms in those molecules. [ 4 ]
Bader married Pamela Kozenof, a nurse from New Zealand, in 1958. [ 2 ] They had three daughters, Carolyn, Kimberly and Suzanne. [ 5 ] He had one grandson, Alexander. [ 5 ] | https://en.wikipedia.org/wiki/Richard_Bader |
Richard C. Willson is a professor of chemical and biomolecular engineering at the University of Houston noted for his work on the development of purification, detection, and measurement technologies for applications in pharmaceutical manufacturing, process control, and medical diagnostics. [ 1 ]
Willson received B.S. and M.S. degrees in chemical engineering from Caltech in 1981 and 1982, respectively. He moved to the Massachusetts Institute of Technology for his doctoral work, where he worked under Charles L. Cooney and Richard C. Reid and received his PhD in 1988.
Willson joined the Department of Chemical Engineering (now the William A. Brookshire Department of Chemical & Biomolecular Engineering) at the University of Houston in 1988, where he is currently the Huffington-Woestemeyer Professor.
He has developed methods to detect viruses and other biothreats based on the materials underpinning reflective vests, [ 2 ] glow-in-the-dark stars, [ 3 ] and glow sticks. [ 4 ] In 2024, he led a project on antibody measurement as part of a grant from the National Institute for Innovation in Manufacturing Biopharmaceuticals. [ 5 ]
His most-cited research article established the technique of combinatorial screening for catalysts. [ 6 ] Other highly cited research articles report DNA aptamer binding to vascular endothelial growth factor [ 7 ] and develop luminescent nanoparticles as reporters in a sensitive lateral flow assay . [ 8 ]
He is a member of the editorial boards of Biotechnology Progress and PLOS One . He contributed episodes on chromatography [ 9 ] and avocado seeds [ 10 ] to Engines of Our Ingenuity . | https://en.wikipedia.org/wiki/Richard_C._Willson |
Julius Wilhelm Richard Dedekind ( /ˈdeɪdɪkɪnd/ ; [ 1 ] German: [ˈdeːdəˌkɪnt] ; 6 October 1831 – 12 February 1916) was a German mathematician who made important contributions to number theory , abstract algebra (particularly ring theory ), and the axiomatic foundations of arithmetic . His best-known contribution is the definition of the real numbers through the notion of the Dedekind cut . He is also considered a pioneer in the development of modern set theory and of the philosophy of mathematics known as logicism .
Dedekind's father was Julius Levin Ulrich Dedekind, an administrator of Collegium Carolinum in Braunschweig . His mother was Caroline Henriette Dedekind (née Emperius), the daughter of a professor at the Collegium. [ 2 ] Richard Dedekind had three older siblings. As an adult, he never used the names Julius Wilhelm. He was born in Braunschweig (often called "Brunswick" in English), which is where he lived most of his life and died. His body rests at Braunschweig Main Cemetery .
He first attended the Collegium Carolinum in 1848 before transferring to the University of Göttingen in 1850. There, Dedekind was taught number theory by professor Moritz Stern . Gauss was still teaching, although mostly at an elementary level, and Dedekind became his last student. Dedekind received his doctorate in 1852, for a thesis titled Über die Theorie der Eulerschen Integrale ("On the Theory of Eulerian integrals "). This thesis did not display the talent evident in Dedekind's subsequent publications.
At that time, the University of Berlin , not Göttingen , was the main facility for mathematical research in Germany. Thus Dedekind went to Berlin for two years of study, where he and Bernhard Riemann were contemporaries; they were both awarded the habilitation in 1854. Dedekind returned to Göttingen to teach as a Privatdozent , giving courses on probability and geometry . He studied for a while with Peter Gustav Lejeune Dirichlet , and they became good friends. Because of lingering weaknesses in his mathematical knowledge, he studied elliptic and abelian functions . Yet he was also the first at Göttingen to lecture concerning Galois theory . About this time, he became one of the first people to understand the importance of the notion of groups for algebra and arithmetic .
In 1858, he began teaching at the Polytechnic school in Zürich (now ETH Zürich). When the Collegium Carolinum was upgraded to a Technische Hochschule (Institute of Technology) in 1862, Dedekind returned to his native Braunschweig, where he spent the rest of his life, teaching at the Institute. He retired in 1894, but did occasional teaching and continued to publish. He never married, instead living with his sister Julia.
Dedekind was elected to the Academies of Berlin (1880) and Rome, and to the French Academy of Sciences (1900). He received honorary doctorates from the universities of Oslo , Zurich , and Braunschweig .
While teaching calculus for the first time at the Polytechnic school, Dedekind developed the notion now known as a Dedekind cut (German: Schnitt ), now a standard definition of the real numbers. The idea of a cut is that an irrational number divides the rational numbers into two classes ( sets ), with every number of one class (the greater) being strictly greater than every number of the other (the lesser) class. For example, the square root of 2 puts into the lesser class all the negative numbers and the nonnegative numbers whose squares are less than 2, and into the greater class the positive numbers whose squares are greater than 2. Every location on the number line continuum contains either a rational or an irrational number; thus there are no empty locations, gaps, or discontinuities. Dedekind published his thoughts on irrational numbers and Dedekind cuts in his pamphlet "Stetigkeit und irrationale Zahlen" ("Continuity and irrational numbers"); [ 3 ] in modern terminology, this property is called Vollständigkeit , completeness .
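In modern terms, the lower class of a cut can be described by a membership predicate on the rationals. The following is a small illustrative sketch of the cut for the square root of 2 — the function name and the sample values are ours, not Dedekind's notation:

```python
from fractions import Fraction

def in_lower_class(q: Fraction) -> bool:
    """Lower class of the cut defining the square root of 2:
    all negative rationals plus the nonnegative rationals whose square is < 2."""
    return q < 0 or q * q < 2

# 7/5 = 1.4 lies below sqrt(2) ~ 1.41421; 3/2 = 1.5 lies above it.
assert in_lower_class(Fraction(7, 5))
assert not in_lower_class(Fraction(3, 2))

# The lower class has no greatest element: q -> (2q + 2)/(q + 2)
# sends any member below sqrt(2) to a strictly larger member.
q = Fraction(7, 5)
q_next = (2 * q + 2) / (q + 2)
assert q < q_next and in_lower_class(q_next)
```

The absence of a greatest element in the lower class (and of a least element in the upper class, for an irrational cut) is exactly what marks the "gap" that the new real number fills.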
Dedekind defined two sets to be "similar" when there exists a one-to-one correspondence between them. [ 4 ] He invoked similarity to give the first [ 5 ] precise definition of an infinite set : a set is infinite when it is "similar to a proper part of itself," [ 6 ] in modern terminology, is equinumerous to one of its proper subsets . Thus the set N of natural numbers can be shown to be similar to the proper subset of N whose members are the squares of the members of N , under the map n ↦ n².
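Dedekind's criterion can be illustrated on an initial segment of N: squaring pairs each natural number with a distinct element of a proper subset of N. A toy sketch (the range bound of 10 is ours, chosen only for illustration):

```python
# Dedekind's "similarity": the map n -> n*n is a one-to-one correspondence
# between the naturals and the proper subset consisting of the squares.
naturals = list(range(1, 11))
squares = [n * n for n in naturals]

# One-to-one: distinct naturals map to distinct squares.
assert len(set(squares)) == len(naturals)

# Proper subset: every square here is a natural number up to 100,
# but some naturals in that range (e.g. 2) are not squares.
assert set(squares) < set(range(1, 101))
assert 2 not in set(squares)
```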
Dedekind's work in this area anticipated that of Georg Cantor , who is commonly considered the founder of set theory . Likewise, his contributions to the foundations of mathematics anticipated later works by major proponents of logicism , such as Gottlob Frege and Bertrand Russell .
Dedekind edited the collected works of Lejeune Dirichlet , Gauss , and Riemann . Dedekind's study of Lejeune Dirichlet's work led him to his later study of algebraic number fields and ideals . In 1863, he published Lejeune Dirichlet's lectures on number theory as Vorlesungen über Zahlentheorie ("Lectures on Number Theory") about which it has been written that:
Although the book is assuredly based on Dirichlet's lectures, and although Dedekind himself referred to the book throughout his life as Dirichlet's, the book itself was entirely written by Dedekind, for the most part after Dirichlet's death.
The 1879 and 1894 editions of the Vorlesungen included supplements introducing the notion of an ideal, fundamental to ring theory . (The word "Ring", introduced later by Hilbert , does not appear in Dedekind's work.) Dedekind defined an ideal as a subset of a set of numbers, composed of algebraic integers that satisfy polynomial equations with integer coefficients. The concept underwent further development in the hands of Hilbert and, especially, of Emmy Noether . Ideals generalize Ernst Eduard Kummer 's ideal numbers , devised as part of Kummer's 1843 attempt to prove Fermat's Last Theorem . (Thus Dedekind can be said to have been Kummer's most important disciple.) In an 1882 article, Dedekind and Heinrich Martin Weber applied ideals to Riemann surfaces , giving an algebraic proof of the Riemann–Roch theorem .
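The defining closure properties of an ideal can be checked in the simplest ring, the integers, where the multiples of a fixed integer form an ideal. This is a toy verification on sample values, not Dedekind's algebraic-number-field setting; the modulus and sample elements are arbitrary:

```python
# In the ring of integers, the multiples of 3 form an ideal: the set is
# closed under addition, and absorbs multiplication by any ring element.
def in_ideal(x: int) -> bool:
    return x % 3 == 0

members = [-6, 0, 3, 9]
ring_elements = [-2, 1, 5]

for a in members:
    for b in members:
        assert in_ideal(a + b)      # closed under addition
    for r in ring_elements:
        assert in_ideal(r * a)      # absorbs multiplication from the ring
```

In Dedekind's setting, the payoff is that ideals of the ring of integers of a number field factor uniquely into prime ideals, even when the elements themselves do not factor uniquely.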
In 1888, he published a short monograph titled Was sind und was sollen die Zahlen? ("What are numbers and what are they good for?" Ewald 1996: 790), [ 7 ] which included his definition of an infinite set . He also proposed an axiomatic foundation for the natural numbers, whose primitive notions were the number one and the successor function . The next year, Giuseppe Peano , citing Dedekind, formulated an equivalent but simpler set of axioms , now the standard ones.
Dedekind made other contributions to algebra . For instance, around 1900, he wrote the first papers on modular lattices . In 1872, while on holiday in Interlaken , Dedekind met Georg Cantor . Thus began an enduring relationship of mutual respect, and Dedekind became one of the first mathematicians to admire Cantor's work concerning infinite sets, proving a valued ally in Cantor's disputes with Leopold Kronecker , who was philosophically opposed to Cantor's transfinite numbers . [ 8 ]
Primary literature in English:
Primary literature in German:
There is an online bibliography of the secondary literature on Dedekind. Also consult Stillwell's "Introduction" to Dedekind (1996). | https://en.wikipedia.org/wiki/Richard_Dedekind |
Richard Dronskowski (born 11 November 1961, in Brilon ) is a German chemist and physicist. He is a full professor at the RWTH Aachen University .
Dronskowski studied chemistry and physics at the University of Münster from 1981 to 1986. [ 1 ] He completed his chemistry diploma with Bernt Krebs and Arndt Simon in 1987 [ 1 ] and his physics diploma with Ole Krogh Andersen and Johannes Pollmann in 1989. [ 1 ] He received his doctorate under the supervision of Arndt Simon at the University of Stuttgart. [ 1 ] From 1991 to 1992, he was a visiting scientist in the group of Roald Hoffmann at Cornell University. [ 2 ] In 1995, he finished his habilitation at the University of Dortmund. Since 1997, he has been a full professor at the RWTH Aachen University. [ 1 ]
His research focuses on the following topics: [ 2 ] | https://en.wikipedia.org/wiki/Richard_Dronskowski |
Richard Alexander Gibbs , AC , is an Australian geneticist. He is currently the Wofford Cain Chair and Professor of Molecular and Human Genetics at Baylor College of Medicine in Houston, Texas. [ 1 ]
In 1996, he founded the Human Genome Sequencing Center at BCM, which was one of five worldwide sites selected to complete the final phase of the Human Genome Project . [ 2 ]
This article about an Australian scientist is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Richard_Gibbs_(biologist) |
Richard K. Wilson (born March 23, 1959) is a molecular geneticist. He is the founding Executive Director of the Institute for Genomic Medicine at Nationwide Children’s Hospital and Professor of Pediatrics at the Ohio State University College of Medicine .
He received his A.B. degree (Microbiology) from Miami University in 1981, his Ph.D. (Chemistry) from the University of Oklahoma in 1986, and was a Research Fellow in the Division of Biology at the California Institute of Technology (1986-1990).
In 1990, Wilson joined the faculty of Washington University School of Medicine where he co-founded the Genome Sequencing Center/ McDonnell Genome Institute . [ 1 ] At Washington University, Wilson was the Alan A. and Edith L. Wolff Distinguished Professor of Medicine, Professor of Genetics, Professor of Molecular Microbiology, and a member of the Senior Leadership Committee of the Siteman Cancer Center . Wilson’s laboratories have been among the world’s leaders in genome analysis. His teams have sequenced and analyzed billions of bases of DNA from the genomes of bacteria, yeast, plants, invertebrates, vertebrates, primates and humans . Wilson and his colleagues at Washington University sequenced the first animal genome – that of the roundworm Caenorhabditis elegans [ 2 ] – and contributed substantially to the sequencing and analysis of the human genome. [ 3 ] Following the Human Genome Project, they also sequenced the genomes of the mouse, chimpanzee , orangutan , gorilla , rhesus macaque, platypus, the plants Arabidopsis thaliana and Zea mays (corn), as well as various invertebrates, insect vectors and microorganisms. His team was the first to sequence the genome of a cancer patient and discover genetic signatures relevant to the pathogenesis of the disease. [ 4 ] Building upon these achievements, Wilson and his colleagues helped initiate and participated in several landmark genomics projects including The Cancer Genome Atlas , [ 5 ] the Pediatric Cancer Genome Project , [ 6 ] the Genome Reference Consortium , the Human Microbiome Project , and the Centers for Common Disease Genomics . [ 7 ] He is the most cited author of Nature . [ 8 ]
In 2008, Wilson was elected a Fellow of the American Association for the Advancement of Science. [ 9 ] In 2011, he received the Distinguished Achievement Award from Miami University [ 10 ] and the Distinguished Alumnus Award from the University of Oklahoma College of Arts and Sciences. [ 11 ]
In 2016, Dr. Wilson and several colleagues moved to Nationwide Children’s Hospital and launched the Institute for Genomic Medicine. Their mission is to utilize cutting-edge genome sequencing and analysis technology to discover clues that will lead to more effective diagnosis and treatment of cancer and other human diseases in children and adults. | https://en.wikipedia.org/wiki/Richard_K._Wilson |
Richard B. Kaner is an American synthetic inorganic chemist. He is a distinguished professor and the Dr. Myung Ki Hong Endowed Chair in Materials Innovation [ 1 ] at the University of California, Los Angeles , where he holds a joint appointment in the Department of Chemistry and Biochemistry and the Department of Material Science and Engineering. [ 2 ] [ 3 ] Kaner conducts research on conductive polymers ( polyaniline ), superhard materials and carbon compounds, such as fullerenes and graphene . [ 4 ]
He has been on the board of directors for California NanoSystems Institute . [ 5 ] Kaner is Chief Scientific Adviser to Hydrophilix, Nanotech Energy, and Supermetalix, university spin-off companies .
Kaner was an adjunct professor at the Royal Melbourne Institute of Technology in Australia in 2010. [ 6 ] He was the Eka-Granules Lecturer at the University of Tasmania , and was a visiting professor at the University of Wollongong . [ 3 ] He is an associate editor of the Materials Research Bulletin. [ 3 ] | https://en.wikipedia.org/wiki/Richard_Kaner |
Richard Lawrence "Larry" Edwards is an American geochemist and Distinguished McKnight University Professor and Regents Professor at the University of Minnesota . [ 1 ] He is one of the most cited and respected geochemists in the world, [ 2 ] and is well-known for his contributions to modernizing the uranium-thorium (Th-230) radiometric dating technique. [ 2 ]
Edwards earned his Ph.D from the California Institute of Technology in 1988 after studying under Gerald J. Wasserburg . [ 3 ] His thesis, entitled High Precision Thorium-230 Ages of Corals and the Timing of Sea Level Fluctuations in the Late Quaternary , discusses the usage of Th-230 dating in the examination of corals at Santo and Malekula Islands, Vanuatu . [ 3 ]
Edwards has made notable contributions to anthropology by dating a jawbone at 100,000 years old, suggesting that modern humans inhabited the area of China where the bone was found earlier than previously thought. [ 4 ] His collaborations with geochemist Hai Cheng have led to the largest number of environmental science papers published in Nature Index journals by a pair of geochemists. [ 5 ]
On January 3, 2025, Edwards received the National Medal of Science. [ 6 ]
This biographical article about an Earth scientist is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Richard_Lawrence_Edwards |
Richard Michael Durbin FRS [ 17 ] (born 1960) [ 15 ] is a British computational biologist [ 18 ] [ 19 ] [ 2 ] and Al-Kindi Professor of Genetics at the University of Cambridge . [ 20 ] [ 21 ] [ 22 ] [ 23 ] He also serves as an associate faculty member at the Wellcome Sanger Institute where he was previously a senior group leader. [ 24 ] [ 25 ] [ 26 ] [ 27 ]
Durbin was educated at The Hall School, Hampstead [ citation needed ] and Highgate School in London. [ 15 ] After competing in the 1978/9 International Mathematical Olympiad , [ 28 ] he went on to study at the University of Cambridge graduating in 1982 [ 29 ] with a second class honours degree in the Cambridge Mathematical Tripos . After graduating, he continued to study for a PhD [ 3 ] at St John's College, Cambridge [ 15 ] studying the development and organisation of the nervous system of Caenorhabditis elegans whilst working at the Laboratory of Molecular Biology (LMB) in Cambridge, supervised by John Graham White . [ 3 ]
Durbin's early work included developing the primary instrument software for one of the first X-ray crystallography area detectors [ 30 ] and the MRC Biorad confocal microscope, alongside contributions to neural modelling. [ 31 ] [ 32 ]
He then led the informatics for the Caenorhabditis elegans genome project, [ 33 ] and alongside Jean Thierry-Mieg developed the genome database AceDB , which evolved into the WormBase web resource. Following this he played an important role in data collection for and interpretation of the human genome sequence. [ 34 ]
He has developed numerous methods for computational sequence analysis . [ 35 ] [ 36 ] These include gene finding (e.g. GeneWise) with Ewan Birney [ 37 ] and Hidden Markov models for protein and nucleic acid alignment and matching (e.g. HMMER ) with Sean Eddy and Graeme Mitchison. A standard textbook Biological Sequence analysis coauthored with Sean Eddy , Anders Krogh and Graeme Mitchison [ 16 ] describes some of this work. Using these methods Durbin worked with colleagues to build a series of important genomic data resources, including the protein family database Pfam , [ 38 ] the genome database Ensembl , [ 39 ] and the gene family database TreeFam . [ 9 ]
More recently Durbin has returned to sequencing and has developed low coverage approaches to population genome sequencing, applied first to yeast, [ 40 ] [ 41 ] and has been one of the leaders in the application of new sequencing technology to study human genome variation. [ 42 ] [ 43 ] Durbin currently co-leads the international 1000 Genomes Project to characterise variation down to 1% allele frequency as a foundation for human genetics.
Durbin was a joint winner of the Mullard Award of the Royal Society in 1994 (for work on the confocal microscope ), won the Lord Lloyd of Kilgerran Award of the Foundation for Science and Technology in 2004, and was elected a Fellow of the Royal Society (FRS) in 2004 [ 17 ] and a member of the European Molecular Biology Organization (EMBO) in 2009. The Royal Society awarded its Gabor Medal to Durbin in 2017 for his contributions to computational biology. [ 44 ] In 2023 he received the International Prize for Biology for his work on the Biology of Genomes.
Durbin's certificate of election for the Royal Society reads:
Durbin is distinguished for his powerful contribution to computational biology. In particular, he played a leading role in establishing the new field of bioinformatics. This allows the handling of biological data on an unprecedented scale, enabling genomics to prosper. He led the analysis of the C. elegans genome, and with Thierry-Mieg developed the database software AceDB . In the international genome project he led the analysis of protein coding genes. He introduced key computational tools in software and data handling. His Pfam database allowed the identification of domains in new protein sequences ; it used hidden Markov models to which approach generally he brought rigour and which led to covariance models for RNA sequence. [ 45 ]
Durbin is the son of James Durbin and is married to Julie Ahringer , a scientist at the Gurdon Institute . They have two children. [ 15 ] | https://en.wikipedia.org/wiki/Richard_M._Durbin |
Richard M. Murray is a synthetic biologist and Thomas E. and Doris Everhart Professor of Control & Dynamical Systems and Bioengineering at Caltech , California. [ 1 ] [ 2 ] He was elected to the National Academy of Engineering in 2013 for "contributions in control theory and networked control systems with applications to aerospace engineering, robotics, and autonomy". [ 3 ] Murray is a co-author of several textbooks on feedback and control systems, and helped to develop the Python Control Systems Library to provide operations for use in feedback control systems. [ 4 ] He was a founding member of the Department of Defense's Defense Innovation Advisory Board, established in 2016. [ 5 ]
Murray received a BS in electrical engineering from California Institute of Technology (Caltech) in 1985. He received a MS (1988) and PhD (1990) from the University of California, Berkeley . [ 3 ] [ 6 ]
Murray joined Caltech in 1991 as an assistant professor of mechanical engineering. He became an associate professor in 1997, a professor in 2000, and the Everhart Professor of Control and Dynamical Systems in 2006. He was named the Everhart Professor of Control and Dynamical Systems and Bioengineering in 2009. He has served as Chair of the Division of Engineering and Applied Science (2000–2005) and Director of Information Science and Technology (2006–2009). [ 3 ]
Murray is a pioneer of the field of biological engineering , synthetic biology and control theory [ 7 ] [ 8 ] including feedback in networked control systems , biomolecular feedback, engineered biological circuits, and novel architectures. [ 3 ] [ 9 ]
Murray is a founder and steering group member of the Build-a-Cell Initiative, an international collaboration investigating creation of synthetic live cells. [ 10 ] [ 11 ] [ 12 ] [ 13 ] He is a co-founder of Tierra Biosciences, for cell-free synthetic biology. [ 14 ] [ 15 ] | https://en.wikipedia.org/wiki/Richard_M._Murray |
Richard M. Myers (born March 24, 1954) is an American geneticist and biochemist known for his work on the Human Genome Project (HGP) . The National Human Genome Research Institute says the HGP “[gave] the world a resource of detailed information about the structure, organization and function of the complete set of human genes.” [ 1 ] Myers' genome center, in collaboration with the Joint Genome Institute , contributed more than 10 percent of the data in the project. [ 2 ]
As of July 1, 2022, Myers is Chief Scientific Officer and President Emeritus of the HudsonAlpha Institute for Biotechnology, a non-profit research institute. [ 3 ] Before that, Myers was President and Science Director of the Institute. [ 4 ] He was previously the chair of the department of genetics at Stanford University and director of the Stanford Human Genome Center. [ 5 ]
His research focuses on the human genome with the goal of understanding how allelic variation and gene expression changes contribute to human traits, including diseases, behaviors and other phenotypes.
Myers was born in Selma, Alabama in 1954 and moved to Tuscaloosa, Alabama at age 10. He attended college at the University of Alabama where he earned his bachelor's degree in biochemistry. He then went to graduate school at the University of California at Berkeley, earning his Ph.D. [ 6 ] in 1982 in the laboratory of Dr. Robert Tjian. After that, Myers spent almost four years as a postdoctoral fellow in the lab of Dr. Tom Maniatis at Harvard University, where he studied human gene regulation. Some new technologies he developed in Maniatis's lab exposed him to the field of human genetics, and much of his work since then has involved developing and using genomics and genetic tools to understand basic human biology and disease.
On behalf of Maniatis, Myers attended a conference sponsored by the U.S. Department of Energy and the International Commission for Protection Against Environmental Mutagens and Carcinogens. Nineteen researchers, including Myers, met at a ski resort in Alta, Utah, in December 1984 with a challenge in mind. The group explored how scientists might determine whether the atomic bombs dropped on Hiroshima and Nagasaki during World War II had increased the rate of mutations in the sperm or eggs of survivors. Scientists believed the high doses of radiation could have increased the germline mutation rate, but they would need methods to measure mutations in the children of survivors. Those methods would allow them to compare the number of mutations in offspring with those in the exposed survivors of the bomb blasts. [ 7 ]
Myers recalls a conference attendee saying, “The rate is so low that we would have to sequence the entire human genome to know the answer.” [ 8 ] Not long after, the Department of Energy proposed the basis for the Human Genome Project, which began in earnest in 1990. [ 9 ]
In 1985, Myers set up a lab at the University of California, San Francisco. His team sought to understand globin gene expression, studying both cis- and trans-acting components that regulate transcription of the gene. [ 10 ] His team also worked on finding the mutated gene present in people with Huntington disease. [ 11 ] Myers teamed up with Dr. David Cox, a medical geneticist with a Ph.D. in yeast and human genetics. [ 12 ] Their laboratories worked together for 15 years on multiple technologies for gene hunting, including radiation hybrid mapping, a method that uses high-energy x-rays to fragment human chromosomes, which are recovered in somatic cell hybrids and then used to determine the locations of DNA markers in the human genome. [ 13 ] In 1990 they established a human genome center, which they moved to Stanford University in early 1993. [ 14 ] The pair continued working on inherited human diseases. They found the genes for an inherited form of childhood progressive epilepsy (EPM1), [ 15 ] a gene important for a key step in development of the cerebellum (the "weaver" gene), [ 16 ] and others. Over the course of those years, the Myers lab created and worked with mouse models for Huntington disease and EPM1. [ 17 ] [ 18 ]
Cox and Myers, in their work at the Stanford Human Genome Center, collaborated with the Joint Genome Institute (JGI) in Walnut Creek to sequence the first human genome. That collaboration produced maps of the entire genome, which played a key role in piecing the full sequence together for the Human Genome Project (HGP). The Stanford group generated genome-wide maps as well as finished sequences of three human chromosomes, and the collaboration with JGI contributed 11% of the finished human genome. [ 19 ]
Biotech visionaries Jim Hudson and Lonnie McMillian founded the HudsonAlpha Institute for Biotechnology in 2008, recruiting Myers to lead the Institute as president and science director. [ 20 ]
The Myers Lab at HudsonAlpha studies the human genome, with a focus on allelic variation and how gene expression changes contribute to human traits, including disease. His group utilizes a number of high-throughput genomic methods, including DNA sequencing, genotyping, chromatin immunoprecipitation, mRNA expression profiling, transcriptional promoter and DNA methylation measurements. The lab also uses computational and statistical tools for identifying, characterizing and understanding how functional elements of the genome work together at the molecular level. Researchers in the Myers Lab use these and other state-of-the-art methods to explore how genomes are involved in brain disorders, ALS, cancer, children born with developmental disorders, autoimmune diseases and other traits. [ 21 ] | https://en.wikipedia.org/wiki/Richard_M._Myers |
Richard Muther (November 20, 1913 – October 15, 2014) was an American consulting engineer, faculty member at MIT, and author. He developed fundamental techniques used in plant layout, material handling, and other aspects of industrial engineering . [ 1 ] He was also known as "Mister Systematic". [ 2 ]
Muther was born in 1913 in Newton, Massachusetts to Lorenz Francis Muther and Josephine (Ashleman) Muther. After attending the University of Wisconsin , he obtained his B.S. and M.S. from the Massachusetts Institute of Technology . [ 2 ]
In his early career Muther worked for the Methods Engineering Council in Pittsburgh, the consulting firm of Harold Bright Maynard . In 1956 he founded his own consulting firm, Richard Muther and Associates, [ 3 ] and worked as a consulting engineer for organizations such as Vendo in Kansas City, General Dynamics , Philips in the Netherlands, John Deere , and, in the People's Republic of China, for its Department of Energy. [ 2 ]
In World War II Muther served in the US Navy as an expediter and facilities planning officer. In 1944 he published his first book, entitled Production Line Technique, on mass production methods, based on studies at over 75 industrial plants. [ 2 ]
Over the years Muther taught at the Massachusetts Institute of Technology , at the Naval Postgraduate School , and at Robert College in Turkey. He was a visiting professor and instructor at the University of Missouri–Kansas City , and in Europe at the ETH Zurich in Switzerland and at the Royal Institute of Technology in Sweden. [ 2 ]
Muther received an honorary doctorate ScD(hc) from Lund University in Sweden. He also was awarded the Gilbreth Medal (outstanding contributions to Industrial Engineering) in 1962; [ 4 ] the Materials Handling Award (SAM); the Engineering Citation Award (SME); the Honor Award (IMMS); the Reed-Apple Award (MHIA); and the Donald Francis Award (for outstanding contributions to international material handling).
Muther was the original developer of the relationship chart (REL-CHART) and its companion, the space-relationship diagram. This tool is the basis for many other techniques which are used to optimize the proximity of related functions and minimize unnecessary transportation in industrial facilities.
Muther also created the Mag Count method of measuring the difficulty of handling (transporting) any solid material prior to knowing how it will be moved. He developed the industry-standard color code used to classify industrial space and the related type-of-work symbols. Corresponding black-and-white hatch patterns based on the heraldic tincture code are also part of his methodology. He is considered the "Father of Systematic Planning". | https://en.wikipedia.org/wiki/Richard_Muther_(industrial_engineer) |
Richard P. Simmons (born May 3, 1931) is an American metallurgist.
Simmons was born on May 3, 1931. He attended the Massachusetts Institute of Technology (MIT) beginning in 1949. He became a metallurgist at the Allegheny Ludlum Steel corporation in Pittsburgh . [ 1 ] He then worked for other steel companies before returning to become the chief executive of Allegheny, taking it public and then leading a management buyout . Having become wealthy, he then created a variety of endowments at MIT. [ 2 ]
Simmons was also chairman of the Pittsburgh Symphony Orchestra between 1989 and 1997. He returned as chairman in 2003 until his retirement in 2015. [ 3 ]
| https://en.wikipedia.org/wiki/Richard_P._Simmons |
Richard Robert Ernst (14 August 1933 – 4 June 2021) was a Swiss physical chemist and Nobel laureate. [ 2 ]
Ernst was awarded the Nobel Prize in Chemistry in 1991 for his contributions towards the development of Fourier transform nuclear magnetic resonance (NMR) spectroscopy [ 3 ] while at Varian Associates and ETH Zurich . [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] These underpin applications both to chemistry with NMR spectroscopy and to medicine with magnetic resonance imaging (MRI). [ 1 ]
He humbly referred to himself as a "tool-maker" rather than a scientist. [ 9 ]
Ernst was born in Winterthur , Switzerland on 14 August 1933 [ 10 ] to Robert Ernst and Irma Ernst-Brunner, [ 11 ] the oldest of their three children. He grew up in a house built in 1898 by his grandfather, who was a merchant. [ 12 ] During his childhood, he was interested in music, playing the violoncello and even considering a career as a musical composer. At 13 years old, Ernst stumbled upon a box of chemicals belonging to his late uncle, a metallurgical engineer. [ 13 ] Young Ernst was excited by what he found, and set about trying all conceivable reactions, some of which resulted in explosions that terrified his parents. [ 9 ]
He enrolled in the Eidgenössische Technische Hochschule (ETH) in Zurich to study chemistry and received his diploma in 1957 as a "Diplomierter Ingenieur Chemiker". [ 14 ] He was disappointed in the course content, so he conducted further research and taught himself quantum mechanics and thermodynamics in his spare time. [ 9 ] After a break to complete his military service, Ernst earned his Ph.D. in physical chemistry in 1962 [ 15 ] from ETH Zurich. [ 16 ] His dissertation was on nuclear magnetic resonance in the field of physical chemistry. [ 10 ]
Ernst entered Varian Associates as a scientist in 1963 and invented Fourier transform NMR, noise decoupling, and a number of other methods. He returned to ETH Zurich in 1968 and became a lecturer. He was promoted to assistant professor in 1970 and to associate professor in 1972, and from 1976 he was Full Professor of Physical Chemistry. [ 17 ]
Ernst led a research group dedicated to magnetic resonance spectroscopy, and was the director of the Physical Chemistry Laboratory at the ETH Zurich. He developed two-dimensional NMR and several novel pulse techniques. He retired in 1998. He participated in the development of medical magnetic resonance tomography, as well as the NMR structure determination of biopolymers in solution, collaborating with Professor Kurt Wüthrich . He also participated in the study of intra-molecular dynamics. [ 17 ]
Ernst was a foreign fellow of the Estonian Academy of Sciences (elected 2002), [ 18 ] the US National Academy of Sciences , the Royal Academy of Sciences , London , the German National Academy of Sciences Leopoldina , the Russian Academy of Sciences , the Korean Academy of Science and Technology and Bangladesh Academy of Sciences . [ 19 ] [ 20 ] [ 21 ] He was elected a Foreign Member of the Royal Society (ForMemRS) in 1993 . [ 1 ] He was awarded the John Gamble Kirkwood Medal in 1989. [ 22 ]
In 1991, Ernst was on an aeroplane flying over the Atlantic when he learned he had been awarded the Nobel Prize in Chemistry. He was invited into the cockpit, where he was given a radio to talk to the Nobel committee. They told him he was being honoured "for his contributions to the development of the methodology of high resolution nuclear magnetic resonance (NMR) spectroscopy". [ 9 ] [ 23 ]
Ernst was a member of the World Knowledge Dialogue Scientific Board. He was awarded the Marcel Benoist Prize in 1986, the Wolf Prize in Chemistry in 1991, [ 24 ] and Louisa Gross Horwitz Prize of Columbia University in 1991. [ 20 ] [ 25 ] He was also awarded the Tadeus Reichstein Medal in 2000 [ 26 ] and the Order of the Star of Romania in 2004. [ 27 ] He also held Honorary Doctorates from the Technical University of Munich , EPF Lausanne , University of Zurich , University Antwerpen , Babes-Bolyai University , and University Montpellier . [ 20 ]
The 2009 Bel Air Film Festival featured the world premiere of a documentary film on Ernst, Science Plus Dharma Equals Social Responsibility . Produced by Carlo Burton , the film takes place in Ernst's hometown in Switzerland. [ 28 ] In 2022, another film about Richard R. Ernst premiered at the Cameo cinema in Winterthur, produced by Lukas Schwarzenbacher and Susanne Schmid. The documentary contains a retrospective of Ernst's life, which was filmed only a few months before his death. [ 29 ]
Ernst was married to Magdalena until his death. [ 30 ] Together, they had three children: Anna Magdalena, Katharina Elisabeth and Hans-Martin Walter. [ 10 ] Besides his work, Ernst also enjoyed music and art, specifically Tibetan scroll art. Using scientific techniques, Ernst would research the pigments on the scrolls to learn about their geographic origin and age. [ 11 ]
Ernst died on 4 June 2021 in Winterthur at the age of 87. [ 30 ] [ 24 ] | https://en.wikipedia.org/wiki/Richard_R._Ernst |
Richard S. Potember is an American scientist and inventor. He is currently a principal systems engineer at MITRE . Prior to this he was a program manager in the Tactical Technology Office at the Defense Advanced Research Projects Agency ( DARPA ). He has been an instructor at the Whiting School of Engineering at the Johns Hopkins University since 1987. He was a member of the principal professional staff at the Johns Hopkins Applied Physics Laboratory , Laurel, Maryland, from 1981 to 2015. He was an adjunct professor at The Paul H. Nitze School of Advanced International Studies from 1995 to 1998.
Potember was born in Boston , Massachusetts . He completed his B.S. in chemistry from Merrimack College in 1975 and his Ph.D. in chemistry from Johns Hopkins University in 1979, where his adviser was Dwaine O. Cowan . He completed his postdoctoral fellowship at the Johns Hopkins Applied Physics Laboratory (APL) in 1980. He received an M.S. in technical management from the Whiting School of Engineering , Johns Hopkins University in 1986. [ citation needed ]
Potember was first known for his groundbreaking work in molecular electronics . [ 1 ] [ 2 ] He invented the first two-terminal molecular non-volatile memory or memristor [ citation needed ] as well as an optical disc technology [ citation needed ] that can store multiple bits of information at one location. He also co-invented a sol-gel processed switchable vanadium(IV) oxide thin film coating for energy conservation applications. [ citation needed ]
Potember's recent achievements have focused on biotechnology and biomedical engineering . He performed pioneering work demonstrating that individual living nerve cells can be grown into controlled geometric patterns on substrates and that these neurons can form true synaptic connections. [ 3 ] He also developed technology that can be used to destroy viruses, bacteria and spores in real time in ventilated air and in heating or air-conditioning systems. [ citation needed ]
Potember has also conducted research and development in the areas of time-of-flight mass spectrometry [ citation needed ] and solid propellants . [ citation needed ]
Potember has two sons and lives with his wife in Maryland. | https://en.wikipedia.org/wiki/Richard_S._Potember |
Richard Griffith Seed (May 22, 1928 – November 17, 2013) was an American physicist and businessman best known for forcing a national debate on human cloning in the late 1990s. [ 1 ]
Seed was born in Chicago, Illinois on May 22, 1928. He graduated from Oak Park and River Forest High School in Illinois before attending Harvard University , earning his undergraduate degree there in 1949. [ 2 ] He later received a master's degree, as well as a Ph.D. in physics from Harvard in 1953. His interests soon shifted to the new frontier of biomedicine . In the 1970s Seed co-founded a company that commercialized a technique for transferring embryos in cattle. Later, he and his brother, Chicago surgeon Randolph Seed, started another company, Fertility & Genetics Research Inc., to help infertile women conceive children using the same technique. [ 3 ] These efforts to transplant a human embryo from one woman to an infertile surrogate mother were published in 1984. The cumbersome procedure involved flushing embryos out of the uterus of the egg donor, and was soon eclipsed by in-vitro fertilization . Ultimately the venture failed.
On December 5, 1997, Seed announced that he planned to clone a human being before any federal laws could be enacted to ban the process. [ 4 ] Seed's announcement added fuel to the raging ethical debate on human cloning that had been sparked by Ian Wilmut 's creation of Dolly the sheep , the first clone obtained from adult cells. [ 5 ] Seed's plans were to use the same technique used by the Scottish team. [ 6 ] Seed's announcement went against President Clinton's 1997 proposal for a voluntary private moratorium against human cloning. In the media frenzy that followed, the story of a 69-year-old eccentric and maverick scientist emerged, but Seed possessed impressive credentials and was not dismissed immediately. [ 7 ] While virtually no mainstream scientist believed Seed would succeed, there began a subtle shift in attitudes after Seed made his announcement. [ 8 ]
Retired at the time of his announcement to clone the first human, Seed was reported to have dabbled in ill-fated ventures in the past. He claimed at one time to have commitments for $800,000 toward a goal of $2.5 million needed to clone the first human before 2000. Seed first said that he was going to make little baby clones for infertile couples. Later, "to defuse criticism that I'm taking advantage of desperate women," he announced that he would first clone himself. Still later he announced that he would re-create his wife Gloria. "God made man in his own image," he told National Public Radio correspondent Joe Palca in December 1997. "God intended for man to become one with God. Cloning is the first serious step in becoming one with God." In a later interview on CNN, Seed elaborated: "Man," he said, "will develop the technology and the science and the capability to have an indefinite life span."
Seed was awarded the 1998 Ig Nobel Prize in economics, [ 9 ] and a performance titled The Seedy Opera debuted at the event. [ 10 ]
Richard Seed died in Hobart, Indiana on November 17, 2013, at the age of 85. [ 11 ] | https://en.wikipedia.org/wiki/Richard_Seed |
Richard Tenguerian ( Armenian : Տիգրան Թընկըրեան ; born August 3, 1955) is an architectural model maker of Armenian descent. Some of his notable physical models include the Kingdom Center in Riyadh (1998), Yankee Stadium in New York (2006), The Sail @ Marina Bay in Singapore (2007), and Comcast Center in Philadelphia (2008). More recently, he has made architectural models for the New Tappan Zee Bridge in New York (2013), CBS Sixty Minutes , and the Hudson Yards Redevelopment Project in New York (2014). [ 1 ] He is the founding principal of Tenguerian Models and resides in New York City.
Richard Tenguerian was born in Aleppo , Syria . His parents are survivors of the Armenian genocide [ citation needed ] . His father, Antranig Tenguerian, was a sculptor and his mother, Mary Tenguerian, was a fashion designer who worked for Chanel Studio in Lebanon [ citation needed ] .
At age 14, while attending school, Richard entered a summer internship program with one of Lebanon's leading architectural firms, where he made his first model. He applied to the American University in Beirut , but the 1975 Lebanese Civil War thwarted his intentions.
At age 21, Richard visited a relative in New York and was intrigued by the opportunities the city offered [ citation needed ] . He began working as a model maker, and built a reputation in the architectural community, which helped him finance his undergraduate education. He earned a bachelor's degree in architecture from the Pratt Institute . [ 2 ]
Encouraged by the high demand for model makers in the architectural community, Richard started his own company, becoming a pioneer in the field [ citation needed ] . Early on in his career, he was contacted by the legendary architect Philip Johnson , who sought his services, thus paving his way to greater accomplishments [ citation needed ] .
In addition to various projects in the United States, he has created models for buildings in Dubai, Kuwait, Saudi Arabia, Singapore, Indonesia and England. He has also developed models for clients in Russia, South America, Fiji and the Bahamas. [ 3 ]
Richard Vyškovský (13 July 1929 – 1 August 2019) [ 1 ] [ 2 ] was a Czech architect and creator of paper models.
In the 1960s Vyškovský worked for the State institute for reconstruction of historic towns and monuments (SÚRPMO) in Prague . After a discussion with colleagues, complaining about the limited availability of die-cast Matchbox scale cars, sold in Czechoslovakia only in Tuzex shops, he decided to create similar models from paper. [ 3 ] His first model was designed after a die-cast Packard Landaulet model manufactured by Lesney Products under code Y-11 in the series Models of Yesteryear . [ 3 ] In October 1968 the newspaper Lidová demokracie published a short article about " ing. Blecha and his colleague" accompanied by photos of paper models Mercedes 38/220 and Packard Landaulet. [ 4 ]
The state-owned publishing house SNDK and Vladislav Toman, editor-in-chief of ABC magazine, showed interest in publishing paper models by Blecha and Vyškovský. [ 3 ] As early as 1968, SNDK published their paper diorama " Hussite siege of Karlštejn ". [ 5 ] ABC magazine published a simplified paper model of a Packard Landaulet in March 1969. [ 6 ] In the following seven years Blecha and Vyškovský published in ABC and SNDK/Albatros a series of further paper models, including a vast model of Prague Castle . From 1976 Vyškovský designed the models for ABC and Albatros on his own; the first model he published alone was the Formula 1 Ferrari 312-T2 of Niki Lauda, for ABC. [ 7 ] Since 1997 his models have also been published by the company ERKO typ , which is co-owned by his son Richard. [ 8 ]
Richard Zach is a Canadian logician , philosopher of mathematics , and historian of logic and analytic philosophy . He is currently Professor of Philosophy at the University of Calgary .
Zach's research interests include the development of formal logic and historical figures ( Hilbert , Gödel , and Carnap ) associated with this development. In the philosophy of mathematics Zach has worked on Hilbert's program and the philosophical relevance of proof theory . In mathematical logic , he has made contributions to proof theory ( epsilon calculus , proof complexity ) and to modal and many-valued logic , especially Gödel logic . [ 1 ]
Zach received his undergraduate education at the Vienna University of Technology and his Ph.D. at the Group in Logic and the Methodology of Science at the University of California, Berkeley . His dissertation, Hilbert's Program: Historical, Philosophical, and Metamathematical Perspectives , was jointly supervised by Paolo Mancosu and Jack Silver . [ 2 ]
He has taught at the University of Calgary since 2001, and holds the rank of Professor. He has held visiting appointments at the University of California, Irvine [ 3 ] and McGill University . [ 4 ] Zach is a founding editor of the Review of Symbolic Logic and the Journal for the Study of the History of Analytic Philosophy , and is also associate editor of Studia Logica , and a subject editor for the Stanford Encyclopedia of Philosophy (History of Modern Logic). [ 5 ] He serves on the editorial boards of the Bernays edition [ 6 ] and the Carnap edition. [ 7 ] He was elected to the Council of the Association for Symbolic Logic (ASL) in 2008 [ 8 ] and has served on the ASL Committee on Logic Education [ 9 ] and the executive committee of the Kurt Gödel Society . [ 10 ]
Richards' theorem is a mathematical result due to Paul I. Richards in 1947. The theorem states that for

R ( s ) = k Z ( s ) − s Z ( k ) k Z ( k ) − s Z ( s ) {\displaystyle R(s)={\frac {kZ(s)-sZ(k)}{kZ(k)-sZ(s)}}}
if Z ( s ) {\displaystyle Z(s)} is a positive-real function (PRF) then R ( s ) {\displaystyle R(s)} is a PRF for all real, positive values of k {\displaystyle k} . [ 1 ]
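The statement can be sanity-checked numerically. The sketch below (Python; the sample impedance Z(s) = (s + 1)/(s + 3) and the choice k = 2 are illustrative assumptions, not taken from the source) evaluates the Richards transformation of a positive-real function along the imaginary axis and confirms that the real part stays non-negative, as the theorem requires:

```python
import numpy as np

def Z(s):
    # A sample positive-real function: an RC-type impedance (s + 1)/(s + 3)
    return (s + 1) / (s + 3)

def richards(s, k=2.0):
    # Richards transformation: R(s) = (k Z(s) - s Z(k)) / (k Z(k) - s Z(s))
    return (k * Z(s) - s * Z(k)) / (k * Z(k) - s * Z(s))

# For a PRF, Re R(i*omega) must be non-negative for every real omega
omega = np.linspace(-100.0, 100.0, 2001)
assert richards(1j * omega).real.min() >= -1e-12
```

Any other positive real value of k would serve equally well for this check.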
The theorem has applications in electrical network synthesis . The PRF property of an impedance function determines whether or not a passive network can be realised having that impedance. Richards' theorem led to a new method of realising such networks in the 1940s.
where Z ( s ) {\displaystyle Z(s)} is a PRF, k {\displaystyle k} is a positive real constant, and s = σ + i ω {\displaystyle s=\sigma +i\omega } is the complex frequency variable, can be written as,

R ( s ) = 1 − W ( s ) 1 + W ( s ) {\displaystyle R(s)={\frac {1-W(s)}{1+W(s)}}}

where,

W ( s ) = Z ( s ) − Z ( k ) Z ( s ) + Z ( k ) ⋅ s + k s − k {\displaystyle W(s)={\frac {Z(s)-Z(k)}{Z(s)+Z(k)}}\cdot {\frac {s+k}{s-k}}}
Since Z ( s ) {\displaystyle Z(s)} is PRF then

Z ( s ) + Z ( k ) {\displaystyle Z(s)+Z(k)}
is also PRF. The zeroes of this function are the poles of W ( s ) {\displaystyle W(s)} . Since a PRF can have no zeroes in the right-half s -plane , then W ( s ) {\displaystyle W(s)} can have no poles in the right-half s -plane and hence is analytic in the right-half s -plane.
Let

Z ( i ω ) = r ( ω ) + i x ( ω ) . {\displaystyle Z(i\omega )=r(\omega )+ix(\omega ).}
Then the magnitude of W ( i ω ) {\displaystyle W(i\omega )} is given by,

| W ( i ω ) | 2 = ( r ( ω ) − Z ( k ) ) 2 + x 2 ( ω ) ( r ( ω ) + Z ( k ) ) 2 + x 2 ( ω ) {\displaystyle |W(i\omega )|^{2}={\frac {(r(\omega )-Z(k))^{2}+x^{2}(\omega )}{(r(\omega )+Z(k))^{2}+x^{2}(\omega )}}}
Since the PRF condition requires that r ( ω ) ≥ 0 {\displaystyle r(\omega )\geq 0} for all ω {\displaystyle \omega } then | W ( i ω ) | ≤ 1 {\displaystyle \left|W(i\omega )\right|\leq 1} for all ω {\displaystyle \omega } . The maximum magnitude of W ( s ) {\displaystyle W(s)} occurs on the i ω {\displaystyle i\omega } axis because W ( s ) {\displaystyle W(s)} is analytic in the right-half s -plane. Thus | W ( s ) | ≤ 1 {\displaystyle |W(s)|\leq 1} for σ ≥ 0 {\displaystyle \sigma \geq 0} .
Let W ( s ) = u ( σ , ω ) + i v ( σ , ω ) {\displaystyle W(s)=u(\sigma ,\omega )+iv(\sigma ,\omega )} , then the real part of R ( s ) {\displaystyle R(s)} is given by,

ℜ ( R ( s ) ) = 1 − u 2 − v 2 ( 1 + u ) 2 + v 2 {\displaystyle \Re (R(s))={\frac {1-u^{2}-v^{2}}{(1+u)^{2}+v^{2}}}}
Because | W ( s ) | ≤ 1 {\displaystyle |W(s)|\leq 1} for σ ≥ 0 {\displaystyle \sigma \geq 0} then ℜ ( R ( s ) ) ≥ 0 {\displaystyle \Re (R(s))\geq 0} for σ ≥ 0 {\displaystyle \sigma \geq 0} and consequently R ( s ) {\displaystyle R(s)} must be a PRF. [ 2 ]
Richards' theorem can also be derived from Schwarz's lemma . [ 3 ]
The theorem was introduced by Paul I. Richards as part of his investigation into the properties of PRFs. The term PRF was coined by Otto Brune who proved that the PRF property was a necessary and sufficient condition for a function to be realisable as a passive electrical network, an important result in network synthesis . [ 4 ] Richards gave the theorem in his 1947 paper in the reduced form, [ 5 ]
R ( s ) = Z ( s ) − s Z ( 1 ) Z ( 1 ) − s Z ( s ) {\displaystyle R(s)={\frac {Z(s)-sZ(1)}{Z(1)-sZ(s)}}}

that is, the special case where k = 1 {\displaystyle k=1} .
The theorem (with the more general case of k {\displaystyle k} being able to take on any positive real value) formed the basis of the network synthesis technique presented by Raoul Bott and Richard Duffin in 1949. [ 6 ] In the Bott-Duffin synthesis, Z ( s ) {\displaystyle Z(s)} represents the electrical network to be synthesised and R ( s ) {\displaystyle R(s)} is another (unknown) network incorporated within it ( R ( s ) {\displaystyle R(s)} is unitless, but R ( s ) Z ( k ) {\displaystyle R(s)Z(k)} has units of impedance and R ( s ) / Z ( k ) {\displaystyle R(s)/Z(k)} has units of admittance). Making Z ( s ) {\displaystyle Z(s)} the subject gives

Z ( s ) = 1 1 R ( s ) Z ( k ) + s k Z ( k ) + 1 R ( s ) Z ( k ) + k s Z ( k ) {\displaystyle Z(s)={\cfrac {1}{{\cfrac {1}{R(s)Z(k)}}+{\cfrac {s}{kZ(k)}}}}+{\cfrac {1}{{\cfrac {R(s)}{Z(k)}}+{\cfrac {k}{sZ(k)}}}}}
Since Z ( k ) {\displaystyle Z(k)} is merely a positive real number, Z ( s ) {\displaystyle Z(s)} can be synthesised as a new network proportional to R ( s ) {\displaystyle R(s)} in parallel with a capacitor all in series with a network proportional to the inverse of R ( s ) {\displaystyle R(s)} in parallel with an inductor. By a suitable choice for the value of k {\displaystyle k} , a resonant circuit can be extracted from R ( s ) {\displaystyle R(s)} leaving a function Z ′ ( s ) {\displaystyle Z'(s)} two degrees lower than Z ( s ) {\displaystyle Z(s)} . The whole process can then be applied iteratively to Z ′ ( s ) {\displaystyle Z'(s)} until the degree of the function is reduced to something that can be realised directly. [ 7 ]
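The series/parallel structure just described can be checked symbolically. In the sketch below (Python with SymPy; treating R, k and Z(k) as free positive symbols is an illustrative assumption, and the branch element values follow the parallel-capacitor/parallel-inductor description above), the two branch impedances sum back to the expression obtained by solving the Richards relation for Z(s):

```python
import sympy as sp

s, k, R, Zk = sp.symbols('s k R Z_k', positive=True)

# Branch 1: impedance Z(k)*R in parallel with a capacitor of admittance s/(k*Z(k))
branch1 = 1 / (1 / (Zk * R) + s / (k * Zk))
# Branch 2: impedance Z(k)/R in parallel with an inductor of admittance k/(s*Z(k))
branch2 = 1 / (R / Zk + k / (s * Zk))

# The Richards relation R = (k Z(s) - s Z(k))/(k Z(k) - s Z(s)),
# solved for Z(s), gives Z(k)*(k*R + s)/(k + s*R)
Z_direct = Zk * (k * R + s) / (k + s * R)

assert sp.simplify(branch1 + branch2 - Z_direct) == 0
```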
The advantage of the Bott-Duffin synthesis is that, unlike other methods, it is able to synthesise any PRF. Other methods have limitations such as only being able to deal with two kinds of element in any single network. Its major disadvantage is that it does not result in the minimal number of elements in a network. The number of elements grows exponentially with each iteration. After the first iteration there are two Z ′ {\displaystyle Z'} and associated elements; after the second, there are four Z ″ {\displaystyle Z''} , and so on. [ 8 ]
Hubbard notes that Bott and Duffin appeared not to know the relationship of Richards' theorem to Schwarz's lemma and offers it as his own discovery, [ 9 ] but it was certainly known to Richards, who used it in his own proof of the theorem. [ 10 ]
The Richards equation represents the movement of water in unsaturated soils, and is attributed to Lorenzo A. Richards who published the equation in 1931. [ 1 ] It is a quasilinear partial differential equation ; its analytical solution is often limited to specific initial and boundary conditions. [ 2 ] Proof of the existence and uniqueness of solution was given only in 1983 by Alt and Luckhaus . [ 3 ] The equation is based on the Darcy-Buckingham law [ 1 ] representing flow in porous media under variably saturated conditions, which is stated as

q = − K ( h ) ∇ ( h + z ) {\displaystyle \mathbf {q} =-\mathbf {K} (h)\nabla (h+z)}
where q {\displaystyle \mathbf {q} } is the volumetric flux [L T −1 ], K ( h ) {\displaystyle \mathbf {K} (h)} is the unsaturated hydraulic conductivity [L T −1 ], h {\displaystyle h} is the pressure head [L], and z {\displaystyle z} is the vertical elevation [L].
Considering the law of mass conservation for an incompressible porous medium and constant liquid density, expressed as

∂ θ ∂ t = − ∇ ⋅ q {\displaystyle {\frac {\partial \theta }{\partial t}}=-\nabla \cdot \mathbf {q} }
where θ {\displaystyle \theta } is the volumetric water content [-] and t {\displaystyle t} is time [T].
Then substituting the fluxes by the Darcy-Buckingham law the following mixed-form Richards equation is obtained:

∂ θ ∂ t = ∇ ⋅ ( K ( h ) ∇ ( h + z ) ) {\displaystyle {\frac {\partial \theta }{\partial t}}=\nabla \cdot \left(\mathbf {K} (h)\nabla (h+z)\right)}
For modeling of one-dimensional infiltration this divergence form reduces to

∂ θ ∂ t = ∂ ∂ z ( K ( h ) ( ∂ h ∂ z + 1 ) ) {\displaystyle {\frac {\partial \theta }{\partial t}}={\frac {\partial }{\partial z}}\left(\mathbf {K} (h)\left({\frac {\partial h}{\partial z}}+1\right)\right)}
Although attributed to L. A. Richards, the equation was originally introduced 9 years earlier by Lewis Fry Richardson in 1922. [ 5 ] [ 6 ]
The Richards equation appears in many articles in the environmental literature because it describes the flow in the vadose zone between the atmosphere and the aquifer. It also appears in pure mathematical journals because it has non-trivial solutions. The above-given mixed formulation involves two unknown variables: θ {\displaystyle \theta } and h {\displaystyle h} . This can be easily resolved by considering the constitutive relation θ ( h ) {\displaystyle \theta (h)} , which is known as the water retention curve . Applying the chain rule , the Richards equation may be reformulated as either the h {\displaystyle h} -form (head-based) or the θ {\displaystyle \theta } -form (saturation-based) Richards equation.
Applying the chain rule to the temporal derivative leads to

∂ θ ∂ t = d θ d h ∂ h ∂ t {\displaystyle {\frac {\partial \theta }{\partial t}}={\frac {{\textrm {d}}\theta }{{\textrm {d}}h}}{\frac {\partial h}{\partial t}}}
where d θ d h {\displaystyle {\frac {{\textrm {d}}\theta }{{\textrm {d}}h}}} is known as the retention water capacity C ( h ) {\displaystyle C(h)} . The equation is then stated as

C ( h ) ∂ h ∂ t = ∇ ⋅ ( K ( h ) ∇ ( h + z ) ) {\displaystyle C(h){\frac {\partial h}{\partial t}}=\nabla \cdot \left(\mathbf {K} (h)\nabla (h+z)\right)}
The head-based Richards equation is prone to the following computational issue: the discretized temporal derivative using the implicit Rothe method yields the following approximation:
Δ θ Δ t ≈ C ( h ) Δ h Δ t , and so Δ θ Δ t − C ( h ) Δ h Δ t = ε . {\displaystyle {\frac {\Delta \theta }{\Delta t}}\approx C(h){\frac {\Delta h}{\Delta t}},\quad {\mbox{and so}}\quad {\frac {\Delta \theta }{\Delta t}}-C(h){\frac {\Delta h}{\Delta t}}=\varepsilon .}
This approximation produces an error ε {\displaystyle \varepsilon } that affects the mass conservation of the numerical solution, and so special strategies for temporal derivatives treatment are necessary. [ 7 ]
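The size of ε can be made concrete with a toy example. The sketch below (Python; the retention curve θ(h) is invented purely for illustration and is not a soil model from the text) compares the true increment θ(h¹) − θ(h⁰) with the linearized increment C(h⁰)(h¹ − h⁰) over one large step in pressure head:

```python
import numpy as np

def theta(h):
    # Toy retention curve (illustrative only): water content rises as h -> 0-
    return 0.4 / (1.0 + (0.5 * np.abs(h)) ** 2)

def C(h, eps=1e-6):
    # Retention water capacity C(h) = d(theta)/dh, by central difference
    return (theta(h + eps) - theta(h - eps)) / (2.0 * eps)

h0, h1 = -10.0, -2.0                     # one large implicit step in head
true_dtheta = theta(h1) - theta(h0)      # exact change in water content
linear_dtheta = C(h0) * (h1 - h0)        # head-based (linearized) change
epsilon = linear_dtheta - true_dtheta    # the mass-balance error in the text
```

Over small steps the two increments agree, but over large steps the linearization loses a substantial fraction of the water mass, which is why mass-conservative formulations (for example, modified Picard iteration) are used in practice.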
Applying the chain rule to the spatial derivative leads to

∂ θ ∂ t = ∇ ⋅ ( K ( h ) d h d θ ∇ θ + K ( θ ) ∇ z ) {\displaystyle {\frac {\partial \theta }{\partial t}}=\nabla \cdot \left(\mathbf {K} (h){\frac {{\textrm {d}}h}{{\textrm {d}}\theta }}\nabla \theta +\mathbf {K} (\theta )\nabla z\right)}
where K ( h ) d h d θ {\displaystyle \mathbf {K} (h){\frac {{\textrm {d}}h}{{\textrm {d}}\theta }}} , which could be further formulated as K ( θ ) C ( θ ) {\displaystyle {\frac {\mathbf {K} (\theta )}{C(\theta )}}} , is known as the soil water diffusivity D ( θ ) {\displaystyle \mathbf {D} (\theta )} . The equation is then stated as

∂ θ ∂ t = ∇ ⋅ ( D ( θ ) ∇ θ + K ( θ ) ∇ z ) {\displaystyle {\frac {\partial \theta }{\partial t}}=\nabla \cdot \left(\mathbf {D} (\theta )\nabla \theta +\mathbf {K} (\theta )\nabla z\right)}
The saturation-based Richards equation is prone to the following computational issues. Since the limits lim θ → θ s | | D ( θ ) | | = ∞ {\displaystyle \lim _{\theta \to \theta _{s}}||\mathbf {D} (\theta )||=\infty } and lim θ → θ r | | D ( θ ) | | = ∞ {\displaystyle \lim _{\theta \to \theta _{r}}||\mathbf {D} (\theta )||=\infty } , where θ s {\displaystyle \theta _{s}} is the saturated (maximal) water content and θ r {\displaystyle \theta _{r}} is the residual (minimal) water content, a successful numerical solution is restricted to ranges of water content sufficiently below full saturation (the saturation should even be lower than the air entry value ) as well as sufficiently above the residual water content. [ 8 ]
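A minimal numerical illustration of the saturation-based form: the sketch below (Python; the uniform grid, no-flux boundaries, omitted gravity term, and the linear-in-θ diffusivity are all simplifying assumptions, not from the text) advances ∂θ/∂t = ∂/∂z(D(θ) ∂θ/∂z) with an explicit finite-volume step. Because the update is written in flux form, total water content is conserved to rounding error:

```python
import numpy as np

def D(theta):
    # Illustrative diffusivity, increasing with water content
    return 1e-3 * (0.05 + theta)

def step(theta, dz, dt):
    Dface = 0.5 * (D(theta[:-1]) + D(theta[1:]))  # diffusivity at cell faces
    F = -Dface * np.diff(theta) / dz              # diffusive flux at interior faces
    F = np.concatenate(([0.0], F, [0.0]))         # zero-flux boundary conditions
    return theta - dt * np.diff(F) / dz           # conservative (flux-form) update

z = np.linspace(0.0, 1.0, 101)
theta0 = np.where(z < 0.2, 0.40, 0.10)            # wet layer over drier soil
theta = theta0.copy()
for _ in range(200):                              # dt chosen for explicit stability
    theta = step(theta, dz=0.01, dt=0.05)
```

The explicit step is used only for brevity; practical codes use implicit time stepping because explicit steps are limited by a severe stability restriction on the time step.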
The Richards equation in any of its forms involves soil hydraulic properties, which is a set of five parameters representing soil type. The soil hydraulic properties typically consist of water retention curve parameters by van Genuchten: [ 9 ] ( α , n , m , θ s , θ r {\displaystyle \alpha ,\,n,\,m,\,\theta _{s},\theta _{r}} ), where α {\displaystyle \alpha } is the inverse of air entry value [L −1 ], n {\displaystyle n} is the pore size distribution parameter [-], and m {\displaystyle m} is usually assumed as m = 1 − 1 n {\displaystyle m=1-{\frac {1}{n}}} . Further, the saturated hydraulic conductivity K s {\displaystyle \mathbf {K} _{s}} (which for a non-isotropic medium is a second-order tensor) should also be provided. Identification of these parameters is often non-trivial and was a subject of numerous publications over several decades. [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ]
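The van Genuchten closed forms are easy to state in code. In the sketch below (Python; the parameter values are illustrative, loosely loam-like, and the Mualem pore-connectivity exponent of 0.5 is an assumed common default; none of these numbers come from the text), θ(h) and K(h) are built from the effective saturation Se(h) = [1 + (α|h|)ⁿ]^(−m):

```python
import numpy as np

# Illustrative van Genuchten parameters (roughly loam-like; not from the text)
alpha, n = 3.6, 1.56            # [1/m], [-]
theta_s, theta_r = 0.43, 0.078  # saturated and residual water content [-]
Ks = 0.25                       # saturated hydraulic conductivity [m/day]
m = 1.0 - 1.0 / n

def effective_saturation(h):
    # Se(h) = (1 + (alpha*|h|)^n)^(-m) for h < 0; saturated (Se = 1) otherwise
    h = np.asarray(h, dtype=float)
    return np.where(h < 0.0, (1.0 + (alpha * np.abs(h)) ** n) ** (-m), 1.0)

def water_content(h):
    # theta(h) = theta_r + (theta_s - theta_r) * Se(h)
    return theta_r + (theta_s - theta_r) * effective_saturation(h)

def conductivity(h):
    # Mualem model: K = Ks * Se^0.5 * (1 - (1 - Se^(1/m))^m)^2
    se = effective_saturation(h)
    return Ks * np.sqrt(se) * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2
```

With h ≥ 0 the soil is treated as saturated, so θ = θs and K = Ks.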
The numerical solution of the Richards equation is one of the most challenging problems in earth science. [ 16 ] Richards' equation has been criticized for being computationally expensive and unpredictable [ 17 ] [ 18 ] because there is no guarantee that a solver will converge for a particular set of soil constitutive relations. Advanced computational and software solutions are required to overcome this obstacle. The method has also been criticized for over-emphasizing the role of capillarity, [ 19 ] and for being in some ways 'overly simplistic'. [ 20 ] In one-dimensional simulations of rainfall infiltration into dry soils, a fine spatial discretization of less than one centimeter is required near the land surface, [ 21 ] which is due to the small size of the representative elementary volume for multiphase flow in porous media. In three-dimensional applications the numerical solution of the Richards equation is subject to aspect ratio constraints, where the ratio of horizontal to vertical resolution in the solution domain should be less than about 7. [ citation needed ]
In mathematics , Richardson's theorem establishes the undecidability of the equality of real numbers defined by expressions involving integers , π , ln 2 , and exponential and sine functions. It was proved in 1968 by the mathematician and computer scientist Daniel Richardson of the University of Bath .
Specifically, the class of expressions for which the theorem holds is that generated by rational numbers, the number π , the number ln 2 , the variable x , the operations of addition, subtraction, multiplication, composition , and the sin , exp , and abs functions.
For some classes of expressions generated by other primitives than in Richardson's theorem, there exist algorithms that can determine whether an expression is zero. [ 1 ]
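For intuition, this is what computer-algebra zero-testing looks like in practice. The sketch below (Python with SymPy; the identities are my own examples, not drawn from the text) shows a heuristic simplifier certifying two true identities built from sin, exp and constants; Richardson's theorem says that no algorithm can settle every such question over the full expression class:

```python
import sympy as sp

x = sp.symbols('x', real=True)

# A true identity in sin and constants: simplify reduces it to zero
assert sp.simplify(sp.sin(x) ** 2 + sp.cos(x) ** 2 - 1) == 0

# A constant expression built from exp and ln 2 that is identically zero
assert sp.simplify(sp.exp(sp.log(2)) - 2) == 0
```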
Richardson's theorem can be stated as follows: [ 2 ] Let E be a set of expressions that represent R → R {\displaystyle \mathbb {R} \to \mathbb {R} } functions. Suppose that E includes these expressions:

- x (representing the identity function)
- e x (representing the exponential function)
- sin x (representing the sine function)
- all rational numbers, ln 2, and π (representing constant functions)
Suppose E is also closed under a few standard operations. Specifically, suppose that if A and B are in E , then all of the following are also in E :

- A + B (pointwise addition)
- A − B (pointwise subtraction)
- AB (pointwise multiplication)
- A ∘ B (composition)
Then the following decision problems are unsolvable:

- deciding whether an expression A in E represents a function that is identically zero
- deciding whether there is some real x with A ( x ) < 0
- deciding whether there is some real x with A ( x ) = 0
After Hilbert's tenth problem was solved in 1970, B. F. Caviness observed that the use of e x and ln 2 could be removed. [ 3 ] Wang later noted that under the same assumptions under which the question of whether there was x with A ( x ) < 0 was insolvable, the question of whether there was x with A ( x ) = 0 was also insolvable. [ 4 ]
Miklós Laczkovich also removed the need for π and reduced the use of composition. [ 5 ] In particular, given an expression A ( x ) in the ring generated by the integers, x , sin x n , and sin( x sin x n ) (for n ranging over positive integers), both the question of whether A ( x ) > 0 for some x and whether A ( x ) = 0 for some x are unsolvable.
By contrast, the Tarski–Seidenberg theorem says that the first-order theory of the real field is decidable, so it is not possible to remove the sine function entirely. | https://en.wikipedia.org/wiki/Richardson's_theorem |
The Richardson number ( Ri ) is named after Lewis Fry Richardson (1881–1953). [ 1 ] It is the dimensionless number that expresses the ratio of the buoyancy term to the flow shear term: [ 2 ]
where g {\displaystyle g} is gravity , ρ {\displaystyle \rho } is density, u {\displaystyle u} is a representative flow speed, and z {\displaystyle z} is depth.
The Richardson number, or one of several variants, is of practical importance in weather forecasting and in investigating density and turbidity currents in oceans, lakes, and reservoirs.
When considering flows in which density differences are small (the Boussinesq approximation ), it is common to use the reduced gravity g' , and the relevant parameter is the densimetric Richardson number, [ further explanation needed ] which is used frequently when considering atmospheric or oceanic flows. [ citation needed ]
If the Richardson number is much less than unity, buoyancy is unimportant in the flow. If it is much greater than unity, buoyancy is dominant (in the sense that there is insufficient kinetic energy to homogenize the fluids). If the Richardson number is of order unity, the flow is likely to be buoyancy-driven: the energy of the flow derives from the potential energy originally in the system.
In aviation , the Richardson number is used as a rough measure of expected air turbulence. A lower value indicates a higher degree of turbulence. Values in the range 10 to 0.1 are typical [ citation needed ] , with values below unity indicating significant turbulence.
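One common densimetric form is Ri = g′h/u², with reduced gravity g′ = gΔρ/ρ. A minimal sketch (the 0.1 and 10 cutoffs below are rough "much less/greater than unity" markers matching the interpretation above; the exact form and thresholds vary by field):

```python
G = 9.81  # gravitational acceleration, m/s^2

def densimetric_richardson(delta_rho, rho_ref, depth, speed):
    """Ri = g' * h / u^2, one common densimetric form (conventions vary)."""
    g_prime = G * delta_rho / rho_ref  # reduced gravity
    return g_prime * depth / speed ** 2

def flow_regime(ri):
    # rough order-of-magnitude markers for "much less/greater than unity"
    if ri < 0.1:
        return "buoyancy unimportant"
    if ri > 10:
        return "buoyancy dominant"
    return "likely buoyancy-driven"

# e.g. a 10 m deep layer, 0.2% density excess, 0.5 m/s flow
ri = densimetric_richardson(delta_rho=2.0, rho_ref=1000.0, depth=10.0, speed=0.5)
print(ri, flow_regime(ri))  # Ri of order unity
```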
In thermal convection problems, Richardson number represents the importance of natural convection relative to the forced convection . The Richardson number in this context is defined as
where g is the gravitational acceleration, β {\displaystyle \beta } is the thermal expansion coefficient , T hot is the hot wall temperature, T ref is the reference temperature, L is the characteristic length, and V is the characteristic velocity.
The Richardson number can also be expressed by using a combination of the Grashof number and Reynolds number ,
Typically, natural convection is negligible when Ri < 0.1, forced convection is negligible when Ri > 10, and neither is negligible when 0.1 < Ri < 10. Forced convection is usually large relative to natural convection, except at extremely low forced-flow velocities. However, buoyancy often plays a significant role in defining the laminar–turbulent transition of a mixed convection flow. [ 3 ] In the design of water-filled thermal energy storage tanks, the Richardson number can be useful. [ 4 ]
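The Gr/Re² relation and the regime thresholds above can be sketched as follows (Ri = Gr/Re² is the standard mixed-convection form; the sample values are illustrative):

```python
def richardson_from_gr_re(grashof, reynolds):
    # Ri = Gr / Re^2 for mixed (combined natural + forced) convection
    return grashof / reynolds ** 2

def convection_regime(ri):
    if ri < 0.1:
        return "forced convection dominates"
    if ri > 10:
        return "natural convection dominates"
    return "mixed convection"

ri = richardson_from_gr_re(grashof=1e8, reynolds=1e4)
print(ri, convection_regime(ri))  # Ri = 1.0 -> mixed convection
```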
In atmospheric science, several different expressions for the Richardson number are commonly used: the flux Richardson number (which is fundamental), the gradient Richardson number, and the bulk Richardson number.
where T v {\displaystyle T_{v}} is the virtual temperature , θ v {\displaystyle \theta _{v}} is the virtual potential temperature , z {\displaystyle z} is the altitude, u {\displaystyle u} is the x {\displaystyle x} component of the wind, v {\displaystyle v} is the y {\displaystyle y} component of the wind, and w {\displaystyle w} is the z {\displaystyle z} (vertical) component of the wind. A prime (e.g. w ′ {\displaystyle w'} ) denotes a deviation of the respective field from its Reynolds average .
Here, for any variable f {\displaystyle f} , Δ f := f z 1 − f z 0 {\displaystyle \Delta f:=f_{z1}-f_{z0}} , i.e. the difference between f {\displaystyle f} at altitude z 1 {\displaystyle z1} and altitude z 0 {\displaystyle z0} . If the lower reference level is taken to be z 0 = 0 {\displaystyle z0=0} , then u z 0 = v z 0 = 0 {\displaystyle u_{z0}=v_{z0}=0} (due to the no-slip boundary condition ), so the expression simplifies to:
In oceanography , the Richardson number has a more general form [ citation needed ] which takes stratification into account. It is a measure of relative importance of mechanical and density effects in the water column, as described by the Taylor–Goldstein equation , used to model Kelvin–Helmholtz instability which is driven by sheared flows.
where N is the Brunt–Väisälä frequency and u the wind speed.
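The oceanographic gradient form, Ri = N²/(∂u/∂z)², together with the Ri < 1/4 necessary condition for shear instability, can be sketched as (illustrative values; the criterion is necessary, not sufficient):

```python
def gradient_richardson(n_squared, du_dz):
    """Ri = N^2 / (du/dz)^2, with N the Brunt-Vaisala frequency."""
    return n_squared / du_dz ** 2

def shear_instability_possible(ri):
    # Ri < 1/4 is a necessary (not sufficient) condition for shear
    # to overcome the stratification
    return ri < 0.25

# moderately stratified water column with strong shear (illustrative)
ri = gradient_richardson(n_squared=1e-4, du_dz=0.03)
print(ri, shear_instability_possible(ri))
```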
The Richardson number defined above is always considered positive. A negative value of N² (i.e. complex N ) indicates unstable density gradients with active convective overturning. Under such circumstances the magnitude of negative Ri is not generally of interest. It can be shown that Ri < 1/4 is a necessary condition for velocity shear to overcome the tendency of a stratified fluid to remain stratified, and some mixing (turbulence) will generally occur. When Ri is large, turbulent mixing across the stratification is generally suppressed. [ 8 ] | https://en.wikipedia.org/wiki/Richardson_number |
Richmann's law , [ 1 ] [ 2 ] sometimes referred to as Richmann's rule , [ 3 ] Richmann's mixing rule , [ 4 ] Richmann's rule of mixture [ 5 ] or Richmann's law of mixture , [ 6 ] is a physical law for calculating the mixing temperature when pooling multiple bodies. [ 5 ] It is named after the Baltic German physicist Georg Wilhelm Richmann , who published the relationship in 1750, establishing the first general equation for calorimetric calculations . [ 7 ] [ 8 ]
Through experimental measurements, Wilhelm Richmann determined that the following relationship holds when water of different temperatures is mixed: [ 9 ]
It follows:
Here m 1 {\displaystyle m_{1}} and m 2 {\displaystyle m_{2}} are the masses of the two mixture components, T 1 {\displaystyle T_{1}} and T 2 {\displaystyle T_{2}} are their respective initial temperatures, and T m {\displaystyle T_{m}} is the mixture temperature.
This observation is called Richmann's law in the narrower sense and applies in principle to all substances of the same state of aggregation. [ 1 ] [ 9 ] According to this, the mixing temperature is the weighted arithmetic mean of the temperatures of the two initial components.
Richmann's rule of mixing can also be applied in reverse, for example, to the question of the ratio in which quantities of water of given temperatures must be mixed to obtain water of a desired temperature. Determining the quantities m 1 {\displaystyle m_{1}} and m 2 {\displaystyle m_{2}} required for this purpose, given a total quantity M = m 1 + m 2 {\displaystyle M=m_{1}+m_{2}} , is accomplished with the mixing cross. The corresponding formula, obtained from the above equation by rearrangement, is:
For the mixing ratio, this gives:
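As a worked sketch, Richmann's rule and the inverse "mixing cross" problem for a single substance can be written as:

```python
def mixing_temperature(m1, t1, m2, t2):
    # Richmann's rule: the mixture temperature is the mass-weighted
    # arithmetic mean of the component temperatures
    return (m1 * t1 + m2 * t2) / (m1 + m2)

def mixing_masses(total_mass, t1, t2, t_target):
    # inverse problem (mixing cross): split total_mass so the mixture
    # reaches t_target; t_target must lie between t1 and t2
    m1 = total_mass * (t_target - t2) / (t1 - t2)
    return m1, total_mass - m1

# 2 kg at 80 degC mixed with 3 kg at 30 degC -> 50 degC
print(mixing_temperature(2.0, 80.0, 3.0, 30.0))
# and inversely: 5 kg at 50 degC requires 2 kg of the 80 degC water
print(mixing_masses(5.0, 80.0, 30.0, 50.0))
```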
The physical background of the mixing rule is the fact that the heat energy of a substance is directly proportional to its mass and its absolute temperature . The proportionality factor is the specific heat capacity , which depends on the nature of the substance but was not described until some time after Richmann's discovery, by Joseph Black . The validity of the formula is thus limited to mixtures of the same substance, since it assumes a uniform specific heat capacity. [ 9 ] Another condition is that both components be uniformly warm throughout and that there be no appreciable heat exchange with their surroundings.
If one wants to mix two substances with different, but known, specific heat capacities, the mixing rule can be formulated more generally, as shown below.
Under the condition that no change of aggregate state occurs and the system is closed, i.e., in particular, there is no heat exchange with the environment, the following holds:
Where h 1 ( T ) {\displaystyle h_{1}(T)} and h 2 ( T ) {\displaystyle h_{2}(T)} represent the specific enthalpy of the respective components.
If the specific heat capacities c 1 {\displaystyle c_{1}} and c 2 {\displaystyle c_{2}} can be assumed to be constant, this can be transformed to:
The formula resolved by the mixture temperature is then:
In a wider sense this equation is also referred to as Richmann's law because it simply extends Richmann's established relationship to include the specific heat capacity , thus allowing the calculation of the mixing temperature of different substances. [ 2 ] [ 5 ] [ 10 ]
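A sketch of the generalized formula, together with a simple fixed-point iteration for temperature-dependent heat capacities (the midpoint evaluation of c(T) is an illustrative stand-in for the true average capacity over the interval):

```python
def mixture_temperature(m1, c1, t1, m2, c2, t2):
    # generalized Richmann formula for constant specific heat capacities
    return (m1 * c1 * t1 + m2 * c2 * t2) / (m1 * c1 + m2 * c2)

def mixture_temperature_iterative(m1, c1_of_t, t1, m2, c2_of_t, t2, tol=1e-9):
    # temperature-dependent capacities: evaluate each capacity at the
    # midpoint of [t_i, t_m] as a crude average, then update t_m until
    # the estimate stops changing
    t_m = (t1 + t2) / 2.0  # initial guess
    for _ in range(200):
        c1 = c1_of_t((t1 + t_m) / 2.0)
        c2 = c2_of_t((t2 + t_m) / 2.0)
        t_new = (m1 * c1 * t1 + m2 * c2 * t2) / (m1 * c1 + m2 * c2)
        if abs(t_new - t_m) < tol:
            break
        t_m = t_new
    return t_m

# 1 kg water (c ~ 4186 J/(kg K)) at 20 degC with 0.5 kg iron
# (c ~ 449 J/(kg K)) at 200 degC
print(mixture_temperature(1.0, 4186.0, 20.0, 0.5, 449.0, 200.0))  # ~29.2 degC
```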
If the heat capacities are not constant over the entire temperature range, the above formula can be used with an average heat capacity for component i {\displaystyle i} :
In this formula, c i ( T ) {\displaystyle c_{i}(T)} with i = 1 {\displaystyle i=1} or 2 {\displaystyle 2} represents the specific heat capacity of the two components, which may be temperature dependent. Application of the formula may require an iterative procedure to determine the mixture temperature, since the average heat capacity is also temperature dependent. | https://en.wikipedia.org/wiki/Richmann's_law |
The Richtmyer–Meshkov instability (RMI) occurs when two fluids of different density are impulsively accelerated. Normally this is by the passage of a shock wave . The development of the instability begins with small amplitude perturbations which initially grow linearly with time. This is followed by a nonlinear regime with bubbles appearing in the case of a light fluid penetrating a heavy fluid, and with spikes appearing in the case of a heavy fluid penetrating a light fluid. A chaotic regime eventually is reached and the two fluids mix. This instability can be considered the impulsive-acceleration limit of the Rayleigh–Taylor instability . [ 1 ]
For ideal MHD
( ω 2 − 2 k ∥ 2 / β ) ( ω 4 − ( 2 / β + 1 ) k 2 ω 2 + 2 k ∥ 2 k 2 / β ) = 0 {\displaystyle (\omega ^{2}-2k_{\parallel }^{2}/\beta )(\omega ^{4}-(2/\beta +1)k^{2}\omega ^{2}+2k_{\parallel }^{2}k^{2}/\beta )=0}
For Hall MHD
( ω 2 − 2 k ∥ 2 / β ) ( ω 4 − ( 2 / β + 1 ) k 2 ω 2 + 2 k ∥ 2 k 2 / β ) − 2 d s 2 k ∥ 2 k 2 ω 2 ( ω 2 − k 2 ) / β = 0 {\displaystyle (\omega ^{2}-2k_{\parallel }^{2}/\beta )(\omega ^{4}-(2/\beta +1)k^{2}\omega ^{2}+2k_{\parallel }^{2}k^{2}/\beta )-2d_{s}^{2}k_{\parallel }^{2}k^{2}\omega ^{2}(\omega ^{2}-k^{2})/\beta =0}
For QMHD
( ( 1 + 2 / β c 2 ) ω 2 − 2 k ∥ 2 / β ) ( ( 1 + 2 / β c 2 ) ω 4 − ( 2 / β + 1 ) k 2 ω 2 + 2 k ∥ 2 k 2 / β ) − 2 d s 2 k ∥ 2 k 2 ω 2 ( ω 2 − k 2 ) / β = 0 {\displaystyle ((1+2/\beta c^{2})\omega ^{2}-2k_{\parallel }^{2}/\beta )((1+2/\beta c^{2})\omega ^{4}-(2/\beta +1)k^{2}\omega ^{2}+2k_{\parallel }^{2}k^{2}/\beta )-2d_{s}^{2}k_{\parallel }^{2}k^{2}\omega ^{2}(\omega ^{2}-k^{2})/\beta =0}
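The ideal-MHD relation factors into a polynomial in ω², so its three branches can be checked numerically. A sketch using illustrative parameter values (β = 1, k∥ = 0.5, k = 1):

```python
import numpy as np

beta, k_par, k = 1.0, 0.5, 1.0

a = 2 * k_par**2 / beta          # root of the first factor: omega^2 = 2 k_par^2 / beta
b = (2 / beta + 1) * k**2        # omega^2 coefficient of the quadratic factor
c = 2 * k_par**2 * k**2 / beta   # constant term of the quadratic factor

# expand (X - a)(X^2 - b X + c) with X = omega^2
coeffs = [1.0, -(a + b), a * b + c, -a * c]
branches = np.roots(coeffs)      # the three omega^2 branches

print(sorted(branches.real))     # all real and positive for these values
```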
R. D. Richtmyer provided a theoretical prediction, [ 2 ] and E. E. Meshkov (Евгений Евграфович Мешков) ( ru ) provided experimental verification. [ 3 ] Materials from the cores of stars, such as cobalt-56 from Supernova 1987A , were observed earlier than expected, evidence of mixing due to Richtmyer–Meshkov and Rayleigh–Taylor instabilities . [ 4 ]
During the implosion of an inertial confinement fusion target, the hot shell material surrounding the cold D – T fuel layer is shock-accelerated. This instability is also seen in magnetized target fusion (MTF). [ 5 ] Mixing of the shell material and fuel is not desired and efforts are made to minimize any tiny imperfections or irregularities which will be magnified by RMI.
Supersonic combustion in a scramjet may benefit from RMI, as the fuel–oxidant interface is enhanced by the breakup of the fuel into finer droplets. Studies of deflagration-to-detonation transition (DDT) processes also show that RMI-induced flame acceleration can result in detonation. | https://en.wikipedia.org/wiki/Richtmyer–Meshkov_instability
Ride height or ground clearance is the amount of space between the base of an automobile tire and the lowest point of the automobile, typically the bottom exterior of the differential housing (even though the lower shock mounting point may be lower). More properly, it is the shortest distance between a flat, level surface and the lowest part of a vehicle other than those parts designed to contact the ground (such as tires, tracks, or skis). Ground clearance is measured with standard vehicle equipment, and for cars is usually given with no cargo or passengers.
Ground clearance is a critical factor in several important characteristics of a vehicle. For all vehicles, especially cars, variations in clearance represent a trade-off between handling , ride quality , and practicality.
A higher ride height and ground clearance mean that the wheels have more vertical room to travel and absorb road shocks. The car is also more capable of being driven on surfaces that are not level without scraping against surface obstacles and possibly damaging the chassis and underbody.
With a higher ride height, the center of mass of the car is higher, which makes for less precise and more dangerous handling characteristics (most notably, the chance of rollover is higher). Higher ride heights also typically hurt aerodynamic properties. This is why sports cars typically have very low clearances, while off-road vehicles and SUVs have higher ones.
A road car usually has a ride height around 16–17 cm (6.3–6.7 in), while an SUV usually lies around 19–22 cm (7.5–8.7 in). Two well-known extremes are the Ferrari F40 with a 12.5 cm (4.9 in) ride height [ 1 ] and the Hummer H1 with a 40.64 cm (16.0 in) ride height. [ citation needed ]
The table below provides average ride height for different car types which were available on the market in India in 2020: [ 2 ]
Some cars have used underslung frames to achieve a lower ride height and the consequent improvement in center of gravity. The 1905-14 cars of the American Motor Car Company are one example. [ 3 ]
Self-leveling suspension systems are designed to maintain a constant ride height regardless of load. The suspension detects the load via mechanical or electronic means and raises or lowers the vehicle, by inflating cylinders in the suspension to lift the chassis higher. [ 4 ] Vehicles not equipped with self-leveling will pitch down at one end when laden; this adversely affects ride, handling, and aerodynamic properties.
Some modern automobiles (such as the Audi Allroad Quattro and Tesla Model S ) have height adjustable suspension , which can vary the ride height by adjusting the hydropneumatic suspension or air suspension . This adjustment can be automatic, depending on road conditions, and/or the settings selected by the driver. Many buses include a kneeling feature to make it easier for passengers to board and alight, and a ferry lift feature to increase the ground clearance, which is useful for steep ground transitions, such as driving onto ferries, as the name suggests.
Other, simpler suspension systems, such as coilover springs, offer a way of manually adjusting ride height (and often, spring stiffness) by compressing the spring in situ , using a threaded shaft and adjustable knob or nut.
Lowering a car's suspension is a common and relatively inexpensive aftermarket modification. Many car enthusiasts prefer the more aggressive look of a lowered body, [ according to whom? ] and there is an easily realized car handling improvement from the lower center of gravity . Most passenger cars are produced such that one or two inches of lowering will not significantly increase the probability of damage. On most automobiles, ride height is modified by changing the length of the suspension springs , and is the essence of many aftermarket suspension kits supplied by manufacturers such as KW , Eibach , [ 5 ] and H&R . [ 6 ] For trucks, lifted trucks are popular with truck owners, who often upsize their wheels and tires when lifting their vehicles.
For armored fighting vehicles (AFV), ground clearance presents an additional factor in a vehicle's overall performance: a lower ground clearance means that the vehicle minus the chassis is lower to the ground and thus harder to spot and harder to hit. The final design of any AFV reflects a compromise between being a smaller target on one hand, and having greater battlefield mobility on the other. Very few AFVs have top speeds at which car-like handling becomes an issue, though rollovers can and do occur. By contrast, an AFV is far more likely to need high ground clearance than a road vehicle.
18-wheel tractor-trailers also have to take the ground clearance of both their tractor and especially trailer into consideration on certain areas of uneven terrain, such as raised railroad crossings . Their extremely long wheelbase means that such terrain could potentially catch the undercarriage of the trailer in the wide space between the axles, potentially leaving the truck stuck with no means to extricate itself.
In some areas buses are required to have a ground clearance of at least 100 mm ( 3 + 15 ⁄ 16 in). [ 7 ] Too much ride height can give the vehicle an excessively high center of gravity , which could make it unstable or even cause it to flip .
Colloquially referred to as differential clearance or diff clearance. Distance from bottom exterior of axle housing or bottom exterior of differential housing, whichever is lower, to the ground. [ citation needed ]
Distance between bottom of suspension components to ground. In vehicles with independent suspension this is typically the distance between the bottom of the lower control arm and the ground. [ citation needed ]
Distance between the bottom of the lowest sprung mass and the ground. [ citation needed ] | https://en.wikipedia.org/wiki/Ride_height |
Ridge-post framing is an old type of timber framing .
The ridge board of the roof is carried not by king posts standing on tie beams; instead, ridge posts rise from the groundwork. The German term for this construction is Firstständerhaus . The free-standing posts in the interior of the house and the posts in the gable or lateral walls were originally called Firstsäule ("ridge columns"). On a purlin roof the ridge posts carry the ridge purlin, on which hang the sloping rafters to which the roof covering is fixed. This type of Firstständerhaus was built predominantly around the 15th century in the Baden region. | https://en.wikipedia.org/wiki/Ridge-post_framing
Ridges (regions of increased gene expression) are domains of the genome with high gene expression ; the opposite of ridges are antiridges . The term was first used by Caron et al. in 2001. [ 1 ] Characteristics of ridges are: [ 1 ]
Clustering of genes in prokaryotes has long been known. Their genes are grouped in operons ; genes within an operon share a common promoter and are mostly functionally related. The prokaryotic genome is relatively simple and compact. In eukaryotes the genome is huge and only a small fraction of it consists of functional genes; furthermore, the genes are not arranged in operons, except in nematodes and trypanosomes , whose operons differ from prokaryotic ones. In eukaryotes each gene has its own transcription regulation site, so genes do not have to be in close proximity to be co-expressed. It was therefore long assumed that eukaryotic genes were randomly distributed across the genome, given the high rate of chromosome rearrangements. But once complete genome sequences became available, it became possible to locate a gene absolutely and measure its distance to other genes.
The first eukaryote genome ever sequenced was that of Saccharomyces cerevisiae , or budding yeast, in 1996. Half a year later, Velculescu et al. (1997) published a study in which they integrated SAGE data with the now-available genome map. Different genes are active at different moments of the cell cycle, so they used SAGE data from three points in the cycle (log phase, S-phase -arrested and G2 / M -phase-arrested cells). Because in yeast every gene has its own promoter, genes were not expected to cluster near each other, but they did. Clusters were present on all 16 yeast chromosomes. [ 2 ] A year later Cho et al. also reported (although in more detail) that certain genes are located near each other in yeast. [ 3 ]
Cho et al. were the first to determine that clustered genes have the same expression levels. They identified transcripts that show cell-cycle-dependent periodicity; of those genes, 25% were located in close proximity to other genes transcribed in the same phase of the cell cycle. Cohen et al. (2000) also identified clusters of co-expressed genes.
Caron et al. (2001) made a human transcriptome map of 12 different tissues (cancer cells) and concluded that genes are not randomly distributed across the chromosomes. Instead, genes tend to cluster in groups of sometimes as many as 39 genes in close proximity. Clusters were not only gene dense: the authors identified 27 clusters of genes with very high expression levels and called them RIDGEs. A typical RIDGE contains 6 to 30 genes per centiray. There were notable exceptions, however: 40 to 50% of the RIDGEs were not that gene dense; just as in yeast, these RIDGEs were located in the telomere regions. [ 1 ]
Lercher et al. (2002) pointed to some weaknesses in Caron's approach. Clusters of nearby genes with high transcription levels can easily be generated by tandem duplicates: genes can generate duplicates of themselves which are incorporated in their neighborhood. These duplicates can either become a functional part of the pathway of their parent gene or (because they are no longer favored by natural selection) accumulate deleterious mutations and turn into pseudogenes. Because such duplicates are false positives in the search for gene clusters, they have to be excluded. Lercher excluded neighboring genes with high resemblance to each other and then searched with a sliding window for regions of 15 neighboring genes. [ 4 ]
It was clear that gene-dense regions existed, and there was a striking correlation between gene density and high GC content. Some clusters indeed had high expression levels, but most of the highly expressed regions consisted of housekeeping genes : genes that are highly expressed in all tissues because they code for basal mechanisms. Only a minority of the clusters contained genes whose expression was restricted to specific tissues.
Versteeg et al. (2003) tried, with a better human genome map and better SAGE tags, to determine the characteristics of RIDGEs more precisely. Overlapping genes were treated as one gene, and genes without introns were rejected as pseudogenes. They determined that RIDGEs are very gene dense and have high gene expression, short introns, high SINE repeat density and low LINE repeat density. Clusters containing genes with very low transcription levels had the opposite characteristics and were therefore called antiridges. [ 5 ] LINE repeats are junk DNA containing an endonuclease cleavage site (TTTTA). Their scarcity in RIDGEs can be explained by natural selection: endonuclease sites within ORFs can cause deleterious mutations to the genes. Why SINE repeats are abundant is not yet understood.
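The sliding-window idea used in these studies can be sketched as follows (window size, threshold, and the use of the median are illustrative choices, not the published method):

```python
from statistics import median

def find_ridges(expression, window=15, threshold=2.0):
    """Slide a fixed-size window along the gene order of a chromosome
    and flag windows whose median expression exceeds `threshold` times
    the chromosome-wide median.  Returns (start, end) index pairs."""
    overall = median(expression)
    hits = []
    for start in range(len(expression) - window + 1):
        if median(expression[start:start + window]) > threshold * overall:
            hits.append((start, start + window))
    return hits

# toy chromosome: 100 genes, with a highly expressed run at indices 40-60
expr = [1.0] * 100
for i in range(40, 61):
    expr[i] = 10.0
print(find_ridges(expr)[:3])
```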
Versteeg et al. also concluded that, contrary to Lercher's analysis, the transcription levels of many genes in RIDGEs (for example a cluster on chromosome 9) can vary strongly between different tissues. Lee et al. (2003) analyzed the trend of gene clustering across species. They compared Saccharomyces cerevisiae , Homo sapiens , Caenorhabditis elegans , Arabidopsis thaliana and Drosophila melanogaster , and found degrees of clustering, as the fraction of genes in loose clusters, of 37%, 50%, 74%, 52% and 68%, respectively. They concluded that pathways whose genes are clustered across many species are rare. They found seven universally clustered pathways: glycolysis , aminoacyl-tRNA biosynthesis , ATP synthase , DNA polymerase , hexachlorocyclohexane degradation , cyanoamino acid metabolism , and photosynthesis ( ATP synthesis in non-plant species). Not surprisingly, these are basic cellular pathways. [ 6 ]
Lee et al. used very diverse groups of animals. Within these groups clustering is conserved, for example the clustering motifs of Homo sapiens and Mus musculus are more or less the same. [ 7 ]
Spellman and Rubin (2002) made a transcriptome map of Drosophila . Of all assayed genes, 20% were clustered. Clusters consisted of 10 to 30 genes over a group size of about 100 kilobases. The members of the clusters were not functionally related, and the location of clusters did not correlate with known chromatin structures. [ 8 ]
This study also showed that within clusters the expression levels of on average 15 genes were much the same across the many experimental conditions used. These similarities were so striking that the authors reasoned that the genes in the clusters are not individually regulated by their own promoters but that changes in the chromatin structure are involved. A similar co-regulation pattern was published in the same year by Roy et al. (2002) in C. elegans . [ 9 ]
Many genes which are grouped into clusters show the same expression profiles in human invasive ductal breast carcinomas. Roughly 20% of the genes show a correlation with their neighbors. Clusters of co-expressed genes were divided by regions with less correlation between genes. These clusters could cover an entire chromosome arm.
Contrary to the previously discussed reports, Johnidis et al. (2005) discovered that (at least some) genes within clusters are not co-regulated. Aire is a transcription factor that up- and down-regulates various genes. It functions in the negative selection, by medullary cells, of thymocytes that respond to the organism's own epitopes. [ 10 ]
The genes that were controlled by aire clustered. 53 of the genes most activated by aire had an aire-activated neighbor within 200 kb, and 32 of the genes most repressed by aire had an aire-repressed neighbor within 200 kb; this is more than would be expected by chance. They did the same screening for the transcriptional regulator CIITA.
These transcription regulators did not have the same effect on all genes in the same cluster: genes that were activated, repressed, or unaffected were sometimes present in the same cluster. In this case, the aire-regulated genes cannot have clustered simply because they were all co-regulated.
So it is not very clear whether domains are co-regulated or not. A very effective way to test this is to insert synthetic genes into RIDGEs, antiridges and/or random places in the genome, determine their expression, and compare the expression levels with each other. Gierman et al. (2007) were the first to prove co-regulation using this approach. As an insertion construct they used a fluorescent GFP gene driven by the ubiquitously expressed human phosphoglycerate kinase (PGK) promoter. They integrated this construct at 90 different positions in the genome of human HEK293 cells . They found that the expression of the construct in RIDGEs was indeed higher than in antiridges (while all constructs had the same promoter). [ 11 ]
They investigated whether these differences in expression were due to genes in the direct neighborhood of the constructs or to the domain as a whole. They found that constructs next to highly expressed genes were slightly more expressed than others. But when they enlarged the window size to the surrounding 49 genes (domain level), they saw that constructs located in domains with an overall high expression had a more than 2-fold higher expression than those located in domains with a low expression level.
They also checked if the construct was expressed at similar levels as neighboring genes, and if that tight co-expression was present solely within RIDGEs. They found that the expressions were highly correlated within RIDGEs, and almost absent near the end and outside the RIDGEs.
Previous observations and the research of Gierman et al. proved that the activity of a domain has a great impact on the expression of the genes located in it, and that the genes within a RIDGE are co-expressed. However, the constructs used by Gierman et al. were driven by a constitutively active promoter, whereas the genes in the study of Johnidis et al. depended on the presence of the aire transcription factor. The unusual expression of the aire-regulated genes could partly have been caused by differences in expression and conformation of the aire transcription factor itself.
It was known before the genomic era that clustered genes tend to be functionally related. Abderrahim et al. (1994) had shown that all the genes of the major histocompatibility complex were clustered on the 6p21 chromosome. Roy et al. (2002) showed that in the nematode C. elegans genes that are solely expressed in muscle tissue during the larval stage tend to cluster in small groups of 2–5 genes. They identified 13 clusters.
Yamashita et al. (2004) showed that genes related to specific functions in organs tend to cluster. Six liver related domains contained genes for xenobiotic, lipid and alcohol metabolism. Five colon-related domains had genes for apoptosis, cell proliferation, ion transporter and mucin production. These clusters were very small and expression levels were low. Brain and breast related genes didn't cluster. [ 12 ]
This shows that at least some clusters consist of functionally related genes. However, there are great exceptions: Spellman and Rubin have shown that there are clusters of co-expressed genes that are not functionally related. It seems that clusters appear in very different forms.
Cohen et al. found that of a pair of co-expressed genes only one promoter has an Upstream Activating Sequence (UAS) associated with that expression pattern. They suggested that UASs can activate genes that are not in immediate adjacency to them. This could explain the co-expression of small clusters, but many clusters contain too many genes to be regulated by a single UAS.
Chromatin changes are a plausible explanation for the co-regulation seen in clusters. Chromatin consists of the DNA strand and the histones attached to it. Regions where chromatin is very tightly packed are called heterochromatin. Heterochromatin very often consists of remains of viral genomes, transposons and other junk DNA. Because of the tight packing, the DNA is almost unreachable for the transcription machinery; covering deleterious DNA with proteins is the way in which the cell can protect itself. Chromatin containing functional genes often has an open structure in which the DNA is accessible. However, most genes do not need to be expressed all the time.
DNA with genes that are not needed can be covered with histones. When a gene must be expressed, special proteins can alter the chemical groups attached to the histones (histone modifications), causing the histones to open the structure. When the chromatin of one gene is opened, the chromatin of the adjacent genes is opened as well, until the modification meets a boundary element. In that way, genes in close proximity are expressed at the same time, so genes are clustered in "expression hubs". Consistent with this model, Gilbert et al. (2004) showed that RIDGEs are mostly present in open chromatin structures. [ 13 ] [ 14 ]
However, Johnidis et al. (2005) have shown that genes in the same cluster can be expressed very differently. How eukaryotic gene regulation, and the associated chromatin changes, precisely works is still unclear, and there is no consensus about it. To get a clear picture of the mechanism behind gene clusters, the workings of chromatin and gene regulation first need to be illuminated.
Furthermore, most papers that identified clusters of co-regulated genes focused on transcription levels, whereas few focused on clusters regulated by the same transcription factors. Johnidis et al. discovered strange phenomena when they did.
The first models that tried to explain the clustering of genes focused, naturally, on operons, because they were discovered before eukaryote gene clusters were. In 1999 Lawrence proposed a model for the origin of operons. This selfish operon model suggests that individual genes were grouped together by vertical and horizontal transfer and were preserved as a single unit because that was beneficial for the genes, not necessarily for the organism. This model predicts that gene clusters should be conserved between species. This is not the case for many operons and gene clusters seen in eukaryotes. [ 15 ]
According to Eichler and Sankoff, the two main processes in eukaryotic chromosome evolution are 1) rearrangements of chromosomal segments and 2) localized duplication of genes. Clustering could be explained by reasoning that all genes in a cluster originated as tandem duplicates of a common ancestor. If all co-expressed genes in a cluster evolved from a common ancestral gene, they would be expected to be co-expressed because they all have comparable promoters. However, gene clustering is a very common trait in genomes, and it is not clear how this duplication model could explain all of the clustering. Furthermore, many genes that are present in clusters are not homologous.
How did evolutionarily unrelated genes come into close proximity in the first place? Either there is a force that brings functionally related genes near to each other, or the genes came near by chance. Singer et al. proposed that genes came into close proximity by random recombination of genome segments. When functionally related genes ended up in close proximity to each other, this proximity was conserved. They determined all possible recombination sites between genes of human and mouse. They then compared the clustering of the mouse and human genomes and checked whether recombination had occurred at the potential recombination sites. It turned out that recombination between genes of the same cluster was very rare. So, as soon as a functional cluster is formed, recombination is suppressed by the cell. On sex chromosomes, the number of clusters is very low in both human and mouse. The authors reasoned that this was due to the low rate of chromosomal rearrangements of sex chromosomes.
Open chromatin regions are active regions, and genes are more likely to be transferred to them. Genes from organelle and viral genomes are inserted more often in these regions. In this way, non-homologous genes can be pressed together in a small domain. [ 16 ]
It is possible that some regions in the genome are better suited for important genes. It is important for the cell that genes responsible for basal functions are protected from recombination. It has been observed in yeast and worms that essential genes tend to cluster in regions with a low recombination rate. [ 17 ]
It is also possible that genes came into close proximity simply by chance. Other models have been proposed, but none of them can explain all observed phenomena. It is clear that as soon as clusters are formed they are conserved by natural selection. However, a precise model of how genes came into close proximity in the first place is still lacking.
The bulk of the present clusters must have formed relatively recently, because only seven clusters of functionally related genes are conserved between phyla. Some of these differences can be explained by the fact that gene expression is regulated very differently in different phyla. For example, vertebrates and plants use DNA methylation , whereas it is absent in yeast and flies. [ 18 ] | https://en.wikipedia.org/wiki/Ridge_(biology)
In image processing , ridge detection is the attempt, via software, to locate ridges in an image , defined as curves whose points are local maxima of the function, akin to geographical ridges .
For a function of N variables, its ridges are a set of curves whose points are local maxima in N − 1 dimensions. In this respect, the notion of ridge points extends the concept of a local maximum . Correspondingly, the notion of valleys for a function can be defined by replacing the condition of a local maximum with the condition of a local minimum . The union of ridge sets and valley sets, together with a related set of points called the connector set , form a connected set of curves that partition, intersect, or meet at the critical points of the function. This union of sets together is called the function's relative critical set . [ 1 ] [ 2 ]
Ridge sets, valley sets, and relative critical sets represent important geometric information intrinsic to a function. In a way, they provide a compact representation of important features of the function, but the extent to which they can be used to determine global features of the function is an open question. The primary motivation for the creation of ridge detection and valley detection procedures has come from image analysis and computer vision and is to capture the interior of elongated objects in the image domain. Ridge-related representations in terms of watersheds have been used for image segmentation . There have also been attempts to capture the shapes of objects by graph-based representations that reflect ridges, valleys and critical points in the image domain. Such representations may, however, be highly noise sensitive if computed at a single scale only. Because scale-space theoretic computations involve convolution with the Gaussian (smoothing) kernel, it has been hoped that the use of multi-scale ridges, valleys and critical points in the context of scale space theory should allow for a more robust representation of objects (or shapes) in the image.
In this respect, ridges and valleys can be seen as a complement to natural interest points or local extremal points. With appropriately defined concepts, ridges and valleys in the intensity landscape (or in some other representation derived from the intensity landscape) may form a scale invariant skeleton for organizing spatial constraints on local appearance, with a number of qualitative similarities to the way Blum's medial axis transform provides a shape skeleton for binary images . In typical applications, ridge and valley descriptors are often used for detecting roads in aerial images and for detecting blood vessels in retinal images or three-dimensional magnetic resonance images .
Let f(x, y) denote a two-dimensional function, and let L be the scale-space representation of f(x, y) obtained by convolving f(x, y) with a Gaussian function

g(x, y; t) = (1 / (2πt)) e^(−(x² + y²)/(2t)).
Furthermore, let L_pp and L_qq denote the eigenvalues of the Hessian matrix

H = [ L_xx  L_xy ; L_xy  L_yy ]

of the scale-space representation L, computed with a coordinate transformation (a rotation) applied to local directional derivative operators,
where p and q are coordinates of the rotated coordinate system.
It can be shown that the mixed derivative L_pq in the transformed coordinate system is zero if the rotation is chosen so that the p- and q-axes are aligned with the eigendirections of the Hessian matrix.
Then, a formal differential geometric definition of the ridges of f(x, y) at a fixed scale t can be expressed as the set of points that satisfy [ 3 ]

L_p = 0,  L_pp ≤ 0,  |L_pp| ≥ |L_qq|.
Correspondingly, the valleys of f(x, y) at scale t are the set of points

L_q = 0,  L_qq ≥ 0,  |L_qq| ≥ |L_pp|.
In terms of a (u, v) coordinate system, with the v direction parallel to the image gradient, the directional derivative operators can be written

∂_u = sin α ∂_x − cos α ∂_y,  ∂_v = cos α ∂_x + sin α ∂_y,

where cos α = L_x / √(L_x² + L_y²) and sin α = L_y / √(L_x² + L_y²),
it can be shown that this ridge and valley definition can instead be equivalently [ 4 ] written as

L_uv = 0,  L_uu² − L_vv² ≥ 0,

where L_uu, L_uv and L_vv denote second-order directional derivatives in the (u, v) coordinate system, and the sign of L_uu determines the polarity; L_uu < 0 for ridges and L_uu > 0 for valleys.
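The fixed-scale conditions above can be checked numerically from the eigenvalues of the Hessian of the smoothed image. A minimal sketch using SciPy; the synthetic test image, the chosen scale t = 4 and the helper names are illustrative assumptions, not part of the original definition:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_eigenvalues(image, t):
    """Eigenvalues L_pp <= L_qq of the Hessian of the scale-space
    representation L (image smoothed at scale t = sigma^2), per pixel."""
    s = np.sqrt(t)  # standard deviation corresponding to scale t
    Lxx = gaussian_filter(image, s, order=(0, 2))  # d^2/dx^2 (columns)
    Lyy = gaussian_filter(image, s, order=(2, 0))  # d^2/dy^2 (rows)
    Lxy = gaussian_filter(image, s, order=(1, 1))
    mean = 0.5 * (Lxx + Lyy)
    disc = np.sqrt(0.25 * (Lxx - Lyy) ** 2 + Lxy ** 2)
    return mean - disc, mean + disc  # L_pp <= L_qq

# Synthetic image: one bright horizontal line -> a ridge along row 32.
img = np.zeros((64, 64))
img[32, :] = 1.0
Lpp, Lqq = hessian_eigenvalues(img, t=4.0)

# Ridge polarity condition: strongly negative principal curvature that
# dominates the curvature in the orthogonal direction.
ridge_mask = (Lpp < 0) & (np.abs(Lpp) >= np.abs(Lqq))
print(ridge_mask[32, 32], ridge_mask[5, 32])  # on the line vs. far away
```

The zero-crossing condition L_p = 0 is omitted here for brevity; in practice it is tested along the eigendirection associated with L_pp.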
A main problem with the fixed scale ridge definition presented above is that it can be very sensitive to the choice of the scale level. Experiments show that the scale parameter of the Gaussian pre-smoothing kernel must be carefully tuned to the width of the ridge structure in the image domain, in order for the ridge detector to produce a connected curve reflecting the underlying image structures. To handle this problem in the absence of prior information, the notion of scale-space ridges has been introduced, which treats the scale parameter as an inherent property of the ridge definition and allows the scale levels to vary along a scale-space ridge. Moreover, the concept of a scale-space ridge also allows the scale parameter to be automatically tuned to the width of the ridge structures in the image domain, in fact as a consequence of a well-stated definition. In the literature, a number of different approaches have been proposed based on this idea.
Let R(x, y, t) denote a measure of ridge strength (to be specified below). Then, for a two-dimensional image, a scale-space ridge is the set of points that satisfy

L_p = 0,  L_pp ≤ 0,  ∂_t(R) = 0,  ∂_tt(R) ≤ 0,
where t is the scale parameter in the scale-space representation . Similarly, a scale-space valley is the set of points that satisfy

L_q = 0,  L_qq ≥ 0,  ∂_t(R) = 0,  ∂_tt(R) ≤ 0.
An immediate consequence of this definition is that for a two-dimensional image the concept of scale-space ridges sweeps out a set of one-dimensional curves in the three-dimensional scale-space, where the scale parameter is allowed to vary along the scale-space ridge (or the scale-space valley). The ridge descriptor in the image domain will then be a projection of this three-dimensional curve into the two-dimensional image plane, where the attribute scale information at every ridge point can be used as a natural estimate of the width of the ridge structure in the image domain in a neighbourhood of that point.
In the literature, various measures of ridge strength have been proposed. When Lindeberg (1996, 1998) [ 5 ] coined the term scale-space ridge, he considered three measures of ridge strength: the γ-normalized main principal curvature L_pp,γ−norm of the Hessian matrix, the square of the γ-normalized square eigenvalue difference N_γ−norm = (L_pp,γ−norm² − L_qq,γ−norm²)², and the square of the γ-normalized eigenvalue difference A_γ−norm = (L_pp,γ−norm − L_qq,γ−norm)².
The notion of γ {\displaystyle \gamma } -normalized derivatives is essential here, since it allows the ridge and valley detector algorithms to be calibrated properly. By requiring that for a one-dimensional Gaussian ridge embedded in two (or three dimensions) the detection scale should be equal to the width of the ridge structure when measured in units of length (a requirement of a match between the size of the detection filter and the image structure it responds to), it follows that one should choose γ = 3 / 4 {\displaystyle \gamma =3/4} . Out of these three measures of ridge strength, the first entity L p p , γ − n o r m {\displaystyle L_{pp,\gamma -norm}} is a general purpose ridge strength measure with many applications such as blood vessel detection and road extraction. Nevertheless, the entity A γ − n o r m {\displaystyle A_{\gamma -norm}} has been used in applications such as fingerprint enhancement, [ 6 ] real-time hand tracking and gesture recognition [ 7 ] as well as for modelling local image statistics for detecting and tracking humans in images and video. [ 8 ]
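The calibration γ = 3/4 can be illustrated numerically: for a Gaussian ridge profile of variance t0 = σ0², the γ-normalized cross-ridge curvature t^γ |L_pp| is maximized near t ≈ t0, so the selected scale matches the ridge width. A sketch under these assumptions; the image size and scale grid are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

gamma = 0.75  # normalization exponent calibrated for ridge-width selection

# Synthetic 2D image: a horizontal Gaussian ridge of width sigma0 (t0 = sigma0^2).
sigma0 = 3.0
y = np.arange(-40, 41)
profile = np.exp(-y**2 / (2 * sigma0**2))
img = np.tile(profile[:, None], (1, 81))  # constant along columns

scales = np.linspace(2.0, 25.0, 200)  # candidate scales t
strength = []
for t in scales:
    # Cross-ridge second derivative at the ridge centre, at scale t.
    Lpp = gaussian_filter(img, np.sqrt(t), order=(2, 0))[40, 40]
    strength.append(t**gamma * abs(Lpp))  # gamma-normalized ridge strength

t_hat = scales[int(np.argmax(strength))]
print(t_hat)  # close to sigma0**2 = 9: selected scale matches ridge width
```

With γ = 1 instead, the same maximization selects t ≈ 2·t0, which is the doubling of the detection scale mentioned below.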
There are also other closely related ridge definitions that make use of normalized derivatives with the implicit assumption of γ = 1. [ 9 ] When detecting ridges with γ = 1, however, the detection scale will be twice as large as for γ = 3/4, resulting in more shape distortions and a lower ability to capture ridges and valleys with nearby interfering image structures in the image domain.
The notion of ridges and valleys in digital images was introduced by Haralick in 1983 [ 10 ] and by Crowley concerning difference of Gaussians pyramids in 1984. [ 11 ] [ 12 ] The application of ridge descriptors to medical image analysis has been extensively studied by Pizer and his co-workers [ 13 ] [ 14 ] [ 15 ] resulting in their notion of M-reps. [ 16 ] Ridge detection has also been furthered by Lindeberg with the introduction of γ {\displaystyle \gamma } -normalized derivatives and scale-space ridges defined from local maximization of the appropriately normalized main principal curvature of the Hessian matrix (or other measures of ridge strength) over space and over scale. These notions have later been developed with application to road extraction by Steger et al. [ 17 ] [ 18 ] and to blood vessel segmentation by Frangi et al. [ 19 ] as well as to the detection of curvilinear and tubular structures by Sato et al. [ 20 ] and Krissian et al. [ 21 ] A review of several of the classical ridge definitions at a fixed scale including relations between them has been given by Koenderink and van Doorn. [ 22 ] A review of vessel extraction techniques has been presented by Kirbas and Quek. [ 23 ]
In its broadest sense, the notion of ridge generalizes the idea of a local maximum of a real-valued function. A point x 0 {\displaystyle \mathbf {x} _{0}} in the domain of a function f : R n → R {\displaystyle f:\mathbb {R} ^{n}\rightarrow \mathbb {R} } is a local maximum of the function if there is a distance δ > 0 {\displaystyle \delta >0} with the property that if x {\displaystyle \mathbf {x} } is within δ {\displaystyle \delta } units of x 0 {\displaystyle \mathbf {x} _{0}} , then f ( x ) < f ( x 0 ) {\displaystyle f(\mathbf {x} )<f(\mathbf {x} _{0})} . It is well known that critical points, of which local maxima are just one type, are isolated points in a function's domain in all but the most unusual situations ( i.e. , the nongeneric cases).
Consider relaxing slightly the condition that f(x) < f(x_0) for x in an entire neighborhood of x_0, requiring only that this hold on an (n − 1)-dimensional subset. Presumably this relaxation allows the set of points which satisfy the criteria, which we will call the ridge, to have a single degree of freedom, at least in the generic case. This means that the set of ridge points will form a 1-dimensional locus, or a ridge curve. Notice that the above can be modified to generalize the idea to local minima, resulting in what one might call 1-dimensional valley curves.
This following ridge definition follows the book by Eberly [ 24 ] and can be seen as a generalization of some of the abovementioned ridge definitions. Let U ⊂ R n {\displaystyle U\subset \mathbb {R} ^{n}} be an open set, and f : U → R {\displaystyle f:U\rightarrow \mathbb {R} } be smooth. Let x 0 ∈ U {\displaystyle \mathbf {x} _{0}\in U} . Let ∇ x 0 f {\displaystyle \nabla _{\mathbf {x} _{0}}f} be the gradient of f {\displaystyle f} at x 0 {\displaystyle \mathbf {x} _{0}} , and let H x 0 ( f ) {\displaystyle H_{\mathbf {x} _{0}}(f)} be the n × n {\displaystyle n\times n} Hessian matrix of f {\displaystyle f} at x 0 {\displaystyle \mathbf {x} _{0}} . Let λ 1 ≤ λ 2 ≤ ⋯ ≤ λ n {\displaystyle \lambda _{1}\leq \lambda _{2}\leq \cdots \leq \lambda _{n}} be the n {\displaystyle n} ordered eigenvalues of H x 0 ( f ) {\displaystyle H_{\mathbf {x} _{0}}(f)} and let e i {\displaystyle \mathbf {e} _{i}} be a unit eigenvector in the eigenspace for λ i {\displaystyle \lambda _{i}} . (For this, one should assume that all the eigenvalues are distinct.)
The point x 0 {\displaystyle \mathbf {x} _{0}} is a point on the 1-dimensional ridge of f {\displaystyle f} if the following conditions hold:
This makes precise the concept that f {\displaystyle f} restricted to this particular n − 1 {\displaystyle n-1} -dimensional subspace has a local maximum at x 0 {\displaystyle \mathbf {x} _{0}} .
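The local-maximum-on-a-subspace idea can be tested numerically: at a candidate point, compute the gradient and Hessian, and require the gradient to be orthogonal to the most-negative-curvature eigendirection, with that curvature itself negative. A finite-difference sketch; the test function and tolerance are illustrative assumptions:

```python
import numpy as np

# Test function: f(x, y) = exp(-x^2/2) has a 1-dimensional ridge along x = 0.
def f(p):
    x, y = p
    return np.exp(-x**2 / 2)

def grad_hess(fun, p, h=1e-5):
    """Central-difference gradient and Hessian of fun at p."""
    p = np.asarray(p, dtype=float)
    n = p.size
    g = np.zeros(n)
    H = np.zeros((n, n))
    for i in range(n):
        ei = np.zeros(n); ei[i] = h
        g[i] = (fun(p + ei) - fun(p - ei)) / (2 * h)
        for j in range(n):
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (fun(p + ei + ej) - fun(p + ei - ej)
                       - fun(p - ei + ej) + fun(p - ei - ej)) / (4 * h**2)
    return g, H

def is_ridge_point(fun, p, tol=1e-3):
    g, H = grad_hess(fun, p)
    lam, E = np.linalg.eigh(H)  # ascending eigenvalues and eigenvectors
    # 1-dimensional ridge test: gradient orthogonal to the most negative
    # curvature direction, and that curvature strictly negative.
    return bool(abs(g @ E[:, 0]) < tol and lam[0] < -tol)

print(is_ridge_point(f, [0.0, 3.0]), is_ridge_point(f, [1.0, 3.0]))
```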
This definition naturally generalizes to the k -dimensional ridge as follows: the point x 0 {\displaystyle \mathbf {x} _{0}} is a point on the k -dimensional ridge of f {\displaystyle f} if the following conditions hold:
In many ways, these definitions naturally generalize that of a local maximum of a function. Properties of maximal convexity ridges are put on a solid mathematical footing by Damon [ 1 ] and Miller. [ 2 ] Their properties in one-parameter families were established by Keller. [ 25 ]
The following definition can be traced to Fritsch, [ 26 ] who was interested in extracting geometric information about figures in two-dimensional greyscale images. Fritsch filtered his image with a "medialness" filter that gave him information analogous to "distance to the boundary" data in scale-space. Ridges of this image, once projected to the original image, were to be analogous to a shape skeleton (e.g., the Blum medial axis ) of the original image.
What follows is a definition for the maximal scale ridge of a function of three variables, one of which is a "scale" parameter. One thing that we want to be true in this definition is that, if (x, σ) is a point on this ridge, then the value of the function at that point is maximal in the scale dimension. Let f(x, σ) be a smooth function on U ⊂ R² × R₊. Then (x, σ) is a point on the maximal scale ridge if and only if
The purpose of ridge detection is usually to capture the major axis of symmetry of an elongated object, [ citation needed ] whereas the purpose of edge detection is usually to capture the boundary of the object. However, some literature on edge detection erroneously [ citation needed ] includes the notion of ridges into the concept of edges, which confuses the situation.
In terms of definitions, there is a close connection between edge detectors and ridge detectors. With the formulation of non-maximum suppression as given by Canny, [ 27 ] edges are defined as the points where the gradient magnitude assumes a local maximum in the gradient direction. Following a differential geometric way of expressing this definition, [ 28 ] we can, in the above-mentioned (u, v)-coordinate system, state that the gradient magnitude of the scale-space representation, which is equal to the first-order directional derivative in the v-direction, L_v, should have its first-order directional derivative in the v-direction equal to zero,

∂_v(L_v) = 0,
while the second-order directional derivative in the v-direction of L_v should be negative, i.e.,

∂_vv(L_v) < 0.
Written out as an explicit expression in terms of local partial derivatives L_x, L_y, ..., L_yyy, this edge definition can be expressed as the zero-crossing curves of the differential invariant

L_v² L_vv = L_x² L_xx + 2 L_x L_y L_xy + L_y² L_yy = 0,
that satisfy a sign-condition on the following differential invariant

L_v³ L_vvv = L_x³ L_xxx + 3 L_x² L_y L_xxy + 3 L_x L_y² L_xyy + L_y³ L_yyy < 0
(see the article on edge detection for more information). Notably, the edges obtained in this way are the ridges of the gradient magnitude. | https://en.wikipedia.org/wiki/Ridge_detection |
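The edge-defining invariant L_v² L_vv = L_x² L_xx + 2 L_x L_y L_xy + L_y² L_yy can be checked numerically: across a smoothed step edge it changes sign at the edge position. A small sketch; the test image and smoothing scale are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Vertical step edge: zero on the left half, one on the right half.
img = np.zeros((40, 80))
img[:, 40:] = 1.0

s = 3.0  # standard deviation of the Gaussian pre-smoothing
Lx  = gaussian_filter(img, s, order=(0, 1))
Ly  = gaussian_filter(img, s, order=(1, 0))
Lxx = gaussian_filter(img, s, order=(0, 2))
Lxy = gaussian_filter(img, s, order=(1, 1))
Lyy = gaussian_filter(img, s, order=(2, 0))

# Second-order directional derivative in the gradient direction,
# multiplied through by |grad L|^2 to clear denominators:
inv2 = Lx**2 * Lxx + 2 * Lx * Ly * Lxy + Ly**2 * Lyy

row = 20
print(inv2[row, 35] > 0, inv2[row, 44] < 0)  # sign change brackets the edge
```

The zero-crossing of `inv2`, restricted to points where the sign condition on the third-order invariant holds, traces the edge curve.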
In atomic physics, a ridged mirror (or ridged atomic mirror , or Fresnel diffraction mirror ) is a kind of atomic mirror , designed for the specular reflection of neutral particles ( atoms ) coming at a grazing incidence angle . In order to reduce the mean attraction of particles to the surface and increase the reflectivity, this surface has narrow ridges. [ 1 ]
Various estimates for the efficiency of quantum reflection of waves from a ridged mirror have been discussed in the literature. All the estimates explicitly use de Broglie's theory of the wave properties of reflected atoms.
The ridges enhance the quantum reflection from the surface, reducing the effective constant C {\displaystyle ~C~} of the van der Waals attraction of atoms to the surface. Such interpretation leads to the estimate of the reflectivity
where ℓ is the width of the ridges, L is the distance between ridges, θ is the grazing angle , K = mV/ℏ is the wavenumber, and r_0(C, k) is the coefficient of reflection of atoms with wavenumber k from a flat surface at normal incidence. This estimate predicts enhancement of the reflectivity as the period L increases; it is valid for K L θ² ≪ 1. See quantum reflection for the approximation (fit) of the function r_0.
For narrow ridges with a large period L, the ridges simply block part of the wavefront. This can then be interpreted in terms of the Fresnel diffraction [ 2 ] [ 3 ] of the de Broglie wave , or the Zeno effect ; [ 4 ] such an interpretation leads to the estimate of the reflectivity
where the grazing angle θ is assumed to be small. This estimate predicts enhancement of the reflectivity as the period L is reduced. It requires that ℓ/L ≪ 1.
For efficient ridged mirrors, both estimates above should predict high reflectivity. This implies reducing both the width ℓ of the ridges and the period L. Since the width of the ridges cannot be smaller than the size of an atom, this sets the limit of performance of ridged mirrors. [ 5 ]
Ridged mirrors are not yet commercialized, although certain achievements can be mentioned. The reflectivity of a ridged atomic mirror can be orders of magnitude better than that of a flat surface. The use of a ridged mirror as an atomic hologram has been demonstrated.
In Shimizu's and Fujita's work, [ 6 ] atom holography is achieved via electrodes implanted into SiN 4 film over an atomic mirror, or maybe as the atomic mirror itself.
Ridged mirrors can also reflect visible light ; [ 5 ] however, for light waves, the performance is not better than that of a flat surface. An ellipsoidal ridged mirror is proposed as the focusing element for an atomic optical system with submicrometre resolution ( atomic nanoscope ). | https://en.wikipedia.org/wiki/Ridged_mirror |
In solid state physics the Ridley–Watkins–Hilsum theory ( RWH ) explains the mechanism by which differential negative resistance is developed in a bulk solid state semiconductor material when a voltage is applied to the terminals of the sample. [ 1 ] It is the theory behind the operation of the Gunn diode as well as several other microwave semiconductor devices, which are used practically in electronic oscillators to produce microwave power. It is named for British physicists Brian Ridley , [ 2 ] Tom Watkins and Cyril Hilsum who wrote theoretical papers on the effect in 1961.
Negative resistance oscillations in bulk semiconductors had been observed in the laboratory by J. B. Gunn in 1962, [ 3 ] and were thus named the "Gunn effect", but physicist Herbert Kroemer pointed out in 1964 that Gunn's observations could be explained by the RWH theory. [ 4 ]
In essence, the RWH mechanism is the transfer of conduction electrons in a semiconductor from a high-mobility valley to lower-mobility, higher-energy satellite valleys. This phenomenon can only be observed in materials that have such energy band structures.
Normally, in a conductor, an increasing electric field causes higher charge carrier (usually electron) speeds and results in higher current, consistent with Ohm's law . In a multi-valley semiconductor, though, higher energy may push the carriers into a higher-energy state in which they have a higher effective mass and thus slow down: carrier velocities, and with them the current, drop as the voltage is increased. While this transfer occurs, the material exhibits a negative differential resistance. At higher voltages, the normal increase of current with voltage resumes once the bulk of the carriers have been kicked into the higher energy-mass valley. Therefore, the negative resistance occurs only over a limited range of voltages.
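This behaviour is commonly summarized by an N-shaped velocity-field curve. A sketch using an empirical two-valley model of the kind often fitted to GaAs: ohmic rise at low field, a velocity peak, then a drop toward the satellite-valley saturation velocity, i.e. a region where dv/dE < 0. The parameter values are illustrative, not measured data:

```python
import numpy as np

# Illustrative two-valley (transferred-electron) parameters:
mu    = 0.8      # low-field mobility, m^2 / (V s)
v_sat = 1.0e5    # high-field (satellite-valley) drift velocity, m/s
E_c   = 4.0e5    # critical field around which electron transfer occurs, V/m

def drift_velocity(E):
    """Empirical N-shaped v(E): ohmic at low field, dropping toward v_sat."""
    x = (E / E_c) ** 4          # fraction-like weighting of the upper valley
    return (mu * E + v_sat * x) / (1.0 + x)

E = np.linspace(1e4, 2e6, 2000)
v = drift_velocity(E)
dv = np.gradient(v, E)

# Negative differential resistance: dv/dE < 0 beyond the velocity peak.
ndr = E[dv < 0]
print(ndr.size > 0, ndr.min() > E_c * 0.5)
```

In a biased sample, operating in this dv/dE < 0 region is what allows Gunn-domain oscillations to build up.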
Of the semiconducting materials satisfying these conditions, gallium arsenide (GaAs) is the most widely understood and used. However, RWH mechanisms can also be observed in indium phosphide (InP), cadmium telluride (CdTe), zinc selenide (ZnSe) and indium arsenide (InAs) under hydrostatic or uniaxial pressure.
| https://en.wikipedia.org/wiki/Ridley–Watkins–Hilsum_theory
Rieche formylation is a type of formylation reaction . The substrates are electron rich aromatic compounds , such as mesitylene [ 1 ] or phenols , [ 2 ] with dichloromethyl methyl ether acting as the formyl source. The catalyst is titanium tetrachloride and the workup is acidic. The reaction is named after Alfred Rieche who discovered it in 1960. [ 3 ]
See also: Reimer–Tiemann reaction . | https://en.wikipedia.org/wiki/Rieche_formylation
A Rieke metal is a highly reactive metal powder generated by reduction of a metal salt with an alkali metal. These materials are named after Reuben D. Rieke, who, together with a co-worker, first described recipes for their preparation in 1972. [ 1 ] In 1974 he reported Rieke magnesium. [ 2 ] A 1989 paper by Rieke lists several metals that can be produced by his process: Cd , Zn , Ni , Pt , Pd , Fe , In , Tl , Co , Cr , Mo , W , Cu , which are in turn called Rieke-nickel, Rieke-platinum, etc. [ 3 ]
Rieke metals are highly reactive because they have high surface area and lack surface oxides that can retard reaction of bulk materials. The particles are very small, ranging from 1-2 μm down to 0.1 μm or less. Some metals like nickel and copper give black colloidal suspensions that do not settle, even with centrifugation , and cannot be filtered. Other metals such as magnesium and cobalt give larger particles, but these are found to be composed mainly of the alkali salt by-product, with the metal dispersed in them as much finer particles or even as an amorphous phase. [ 3 ]
Rieke metals are usually prepared by a reduction of an anhydrous metal chloride with an alkali metal , in a suitable solvent. [ 4 ] [ 3 ] For example, Rieke magnesium can be prepared from magnesium chloride with potassium as the reductant: [ 5 ] [ 6 ] [ 4 ]

MgCl2 + 2 K → Mg + 2 KCl
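The reduction MgCl2 + 2 K → Mg + 2 KCl consumes two equivalents of potassium per equivalent of magnesium chloride; a quick stoichiometry sketch with approximate standard atomic weights:

```python
# Stoichiometry of the reduction MgCl2 + 2 K -> Mg + 2 KCl
# (approximate standard atomic weights, g/mol):
M_Mg, M_Cl, M_K = 24.305, 35.453, 39.098
M_MgCl2 = M_Mg + 2 * M_Cl  # ~95.21 g/mol

grams_MgCl2 = 10.0                   # illustrative batch size
mol_MgCl2 = grams_MgCl2 / M_MgCl2
grams_K  = 2 * mol_MgCl2 * M_K       # two equivalents of potassium
grams_Mg = mol_MgCl2 * M_Mg          # theoretical yield of Rieke magnesium

print(round(grams_K, 2), round(grams_Mg, 2))  # -> 8.21 2.55
```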
Rieke originally described three general procedures for preparing these activated metal powders.
The alkali metal chloride coprecipitates with the finely divided metal, which can be used in situ or separated by washing away the alkali chloride with a suitable solvent. [ 3 ]
Rieke zinc has attracted the greatest attention of all the Rieke metals. Interest is motivated by the ability of Rieke zinc to convert 2,5-dibromothiophenes to the corresponding polythiophene . [ 8 ] Rieke zinc also reacts with bromoesters to give organozinc reagents of value for the Reformatsky reaction . [ 9 ]
Rieke magnesium reacts with aryl halides, some even at −78 °C, to afford the corresponding Grignard reagents , often with considerable selectivity. [ 10 ] Rieke magnesium is famous for enabling the formation of "impossible Grignard reagents" such as those derived from aryl fluorides and from 2-chloronorbornane. [ 5 ]
The use of highly reactive metals in chemical synthesis was popularized in the 1960s. One development in this theme is the use of metal vapor synthesis , as described by Skell, [ citation needed ] Timms, [ 11 ] Ozin, [ citation needed ] and others. All of these methods relied on elaborate instrumentation to vaporize the metals, releasing an atomic form of these reactants.
In 1972, Reuben D. Rieke, a professor of chemistry at the University of North Carolina, published the method that now bears his name. [ 12 ] In contrast to previous methods, it did not require special equipment, and the main challenges were only the handling of pyrophoric reagents and/or products, and the need for anhydrous reagents and air-free techniques . Thus his discovery gained much attention because of its simplicity and the reactivity of the activated metals.
Rieke continued this work at the University of Nebraska-Lincoln . He and his wife Loretta founded Rieke Metals LLC in 1991, based on these materials. [ 13 ]
Production and use of Rieke metals often involves the handling of highly pyrophoric materials, requiring the use of air-free techniques . | https://en.wikipedia.org/wiki/Rieke_metal |
Riemann invariants are mathematical transformations made on a system of conservation equations to make them more easily solvable. Riemann invariants are constant along the characteristic curves of the partial differential equations where they obtain the name invariant . They were first obtained by Bernhard Riemann in his work on plane waves in gas dynamics. [ 1 ]
Consider the set of conservation equations :
where A_ij and a_ij are the elements of the matrices A and a, and l_i and b_i are elements of vectors. We ask whether it is possible to rewrite this equation as
To do this, curves will be introduced in the (x, t) plane, defined by the vector field (α, β). The term in the brackets will be rewritten in terms of a total derivative, where x, t are parametrized as x = X(η), t = T(η)
Comparing the last two equations, we find
which can be now written in characteristic form
where we must have the conditions
where m j {\displaystyle m_{j}} can be eliminated to give the necessary condition
so for a nontrivial solution the determinant must vanish.
For Riemann invariants we are concerned with the case when the matrix A {\displaystyle \mathbf {A} } is an identity matrix to form
notice this is homogeneous due to the vector n {\displaystyle \mathbf {n} } being zero. In characteristic form the system is
where l is a left eigenvector of the matrix A and the λ's are the characteristic speeds, that is, the eigenvalues of the matrix A , which satisfy
To simplify these characteristic equations we can make the transformations such that d r d t = l i d u i d t {\displaystyle {\frac {dr}{dt}}=l_{i}{\frac {du_{i}}{dt}}}
which form
An integrating factor μ {\displaystyle \mu } can be multiplied in to help integrate this. So the system now has the characteristic form
which is equivalent to the diagonal system [ 2 ]
The solution of this system can be given by the generalized hodograph method . [ 3 ] [ 4 ]
Consider the one-dimensional Euler equations , written in terms of density ρ and velocity u:
where c, the speed of sound , is introduced on account of the isentropic assumption. Write this system in matrix form
where, following the analysis above, the eigenvalues and eigenvectors of the matrix a need to be found. The eigenvalues are found to satisfy
to give

λ_± = u ± c,
and the eigenvectors are found to be
where the Riemann invariants are

J_± = u ± ∫ (c/ρ) dρ
(J_+ and J_− are the widely used notations in gas dynamics ). For a perfect gas with constant specific heats, there is the relation c² = const · γ ρ^(γ−1), where γ is the specific heat ratio , giving the Riemann invariants [ 5 ] [ 6 ]

J_± = u ± 2c/(γ − 1),
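For a perfect gas the invariants take the standard forms J_± = u ± 2c/(γ − 1), which can be inverted to recover u and c; a quick numerical check (the state values are illustrative):

```python
# Riemann invariants for a perfect gas (gamma = 1.4) and their inversion
# back to the primitive variables (u, c):
gamma = 1.4

def invariants(u, c):
    Jp = u + 2 * c / (gamma - 1)
    Jm = u - 2 * c / (gamma - 1)
    return Jp, Jm

def primitives(Jp, Jm):
    u = 0.5 * (Jp + Jm)                 # half the sum gives the velocity
    c = 0.25 * (gamma - 1) * (Jp - Jm)  # quarter of (gamma-1) times the gap
    return u, c

u0, c0 = 100.0, 340.0     # illustrative state: 100 m/s flow, c ~ air
Jp, Jm = invariants(u0, c0)
u1, c1 = primitives(Jp, Jm)
print(round(u1, 6), round(c1, 6))  # -> 100.0 340.0
```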
to give the equations

∂J_±/∂t + (u ± c) ∂J_±/∂x = 0.
In other words, J_+ is constant along the curve C_+ defined by dx/dt = u + c, and J_− is constant along the curve C_− defined by dx/dt = u − c,
where C_+ and C_− are the characteristic curves. This can be solved by the hodograph transformation . In the hodograph plane, if all the characteristics collapse into a single curve, then we obtain simple waves . If the matrix form of the system of PDEs is in the form
Then it may be possible to multiply across by the inverse matrix A⁻¹, so long as the determinant of A is not zero. | https://en.wikipedia.org/wiki/Riemann_invariant
A Riemann problem , named after Bernhard Riemann , is a specific initial value problem composed of a conservation equation together with piecewise constant initial data which has a single discontinuity in the domain of interest. The Riemann problem is very useful for the understanding of equations like the Euler conservation equations because all properties, such as shocks and rarefaction waves, appear as characteristics in the solution. It also gives an exact solution to some complex nonlinear equations, such as the Euler equations .
In numerical analysis , Riemann problems appear in a natural way in finite volume methods for the solution of conservation law equations due to the discreteness of the grid. For that reason, they are widely used in computational fluid dynamics and in computational magnetohydrodynamics simulations. In these fields, Riemann problems are calculated using Riemann solvers .
As a simple example, we investigate the properties of the one-dimensional Riemann problem
in gas dynamics (Toro, Eleuterio F. (1999). Riemann Solvers and Numerical Methods for Fluid Dynamics , p. 44, Example 2.5).
The initial conditions are given by
where x = 0 separates two different states, together with the linearised gas dynamic equations (see gas dynamics for derivation).
where we can assume without loss of generality a ≥ 0 {\displaystyle a\geq 0} .
We can now rewrite the above equations in a conservative form:
where
and the index denotes the partial derivative with respect to the corresponding variable (i.e. x or t).
The eigenvalues of the system are the characteristics of the system λ 1 = − a , λ 2 = a {\displaystyle \lambda _{1}=-a,\lambda _{2}=a} . They give the propagation speed of the medium, including that of any discontinuity, which is the speed of sound here. The corresponding eigenvectors are
By decomposing the left state u L {\displaystyle u_{L}} in terms of the eigenvectors, we get for some α 1 , α 2 {\displaystyle \alpha _{1},\alpha _{2}}
Now we can solve for α 1 {\displaystyle \alpha _{1}} and α 2 {\displaystyle \alpha _{2}} :
Analogously
for
Using this, in the domain between the two characteristics t = | x | / a {\displaystyle t=|x|/a} ,
we get the final constant solution:
and the (piecewise constant) solution in the entire domain t > 0 {\displaystyle t>0} :
Although this is a simple example, it still shows the basic properties. Most notably, the characteristics decompose the solution into three domains. The propagation speed
of these two equations is equivalent to the propagation speed of sound.
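The construction above (decomposing the left and right states in the eigenbasis and selecting coefficients according to wave direction) can be sketched numerically. The matrix A below is an illustrative choice consistent with the eigenvalues −a, +a quoted in the text, and the states uL, uR are arbitrary example values, not from the cited reference.

```python
import numpy as np

# Hedged sketch of the linearised Riemann problem above; A is an
# illustrative matrix with eigenvalues -a and +a (a = signal speed).
a = 1.0
A = np.array([[0.0, a],
              [a,   0.0]])

lam, R = np.linalg.eig(A)
order = np.argsort(lam)            # sort so lam = (-a, +a)
lam, R = lam[order], R[:, order]

uL = np.array([2.0, 0.0])          # left state (example values)
uR = np.array([1.0, 0.0])          # right state (example values)

alpha = np.linalg.solve(R, uL)     # left state in the eigenbasis
beta = np.linalg.solve(R, uR)      # right state in the eigenbasis

# At x/t = 0, the wave with negative speed has already passed (take the
# right-state coefficient); the wave with positive speed has not (take
# the left-state coefficient):
u_star = beta[0] * R[:, 0] + alpha[1] * R[:, 1]
print(u_star)                      # constant middle state, here [1.5, 0.5]
```

The printed vector is the constant state filling the wedge between the two characteristics, matching the three-domain structure described in the text.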
The fastest characteristic defines the Courant–Friedrichs–Lewy (CFL) condition, which sets the restriction for the maximum time step for which an explicit numerical method is stable. Generally as more conservation equations are used, more characteristics are involved. | https://en.wikipedia.org/wiki/Riemann_problem |
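The CFL restriction can be sketched as follows; the function name and the default safety factor are illustrative choices, not from any particular library.

```python
# Hedged sketch of the CFL condition described above: the time step must
# not let the fastest wave cross more than (a fraction of) one cell.
def max_stable_dt(dx, wave_speeds, cfl=0.9):
    """Largest time step allowed by the fastest characteristic speed."""
    return cfl * dx / max(abs(s) for s in wave_speeds)

# e.g. acoustic characteristics lambda = -a, +a with a = 340 m/s:
dt = max_stable_dt(dx=0.01, wave_speeds=[-340.0, 340.0])
print(dt)
```

With more conservation equations there are simply more entries in the wave-speed list, and the fastest one still dictates the step.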
In mathematics , the Riemann series theorem , also called the Riemann rearrangement theorem , named after 19th-century German mathematician Bernhard Riemann , says that if an infinite series of real numbers is conditionally convergent , then its terms can be rearranged in a permutation so that the new series converges to an arbitrary real number, or rearranged so that the new series diverges . This implies that a series of real numbers is absolutely convergent if and only if it is unconditionally convergent . [ 1 ] [ 2 ]
As an example, the series
converges to 0 (for a sufficiently large number of terms, the partial sum gets arbitrarily near to 0); but replacing all terms with their absolute values gives
which sums to infinity. Thus, the original series is conditionally convergent, and can be rearranged (by taking the first two positive terms followed by the first negative term, followed by the next two positive terms and then the next negative term, etc.) to give a series that converges to a different sum, such as
which evaluates to ln 2. More generally, using this procedure with p positives followed by q negatives gives the sum ln( p / q ). Other rearrangements give other finite sums or do not converge to any sum.
It is a basic result that the sum of finitely many numbers does not depend on the order in which they are added. For example, 2 + 6 + 7 = 7 + 2 + 6 . The observation that the sum of an infinite sequence of numbers can depend on the ordering of the summands is commonly attributed to Augustin-Louis Cauchy in 1833. [ 3 ] He analyzed the alternating harmonic series , showing that certain rearrangements of its summands result in different limits. Around the same time, Peter Gustav Lejeune Dirichlet highlighted that such phenomena are ruled out in the context of absolute convergence , and gave further examples of Cauchy's phenomenon for some other series which fail to be absolutely convergent. [ 4 ]
In the course of his analysis of Fourier series and the theory of Riemann integration , Bernhard Riemann gave a full characterization of the rearrangement phenomena. [ 5 ] He proved that in the case of a convergent series which does not converge absolutely (known as conditional convergence ), rearrangements can be found so that the new series converges to any arbitrarily prescribed real number. [ 6 ] Riemann's theorem is now considered as a basic part of the field of mathematical analysis . [ 7 ]
For any series, one may consider the set of all possible sums, corresponding to all possible rearrangements of the summands. Riemann’s theorem can be formulated as saying that, for a series of real numbers, this set is either empty, a single point (in the case of absolute convergence), or the entire real number line (in the case of conditional convergence). In this formulation, Riemann’s theorem was extended by Paul Lévy and Ernst Steinitz to series whose summands are complex numbers or, even more generally, elements of a finite-dimensional real vector space . [ 8 ] [ 9 ] They proved that the set of possible sums forms a real affine subspace . Extensions of the Lévy–Steinitz theorem to series in infinite-dimensional spaces have been considered by a number of authors. [ 10 ]
A series ∑ n = 1 ∞ a n {\textstyle \sum _{n=1}^{\infty }a_{n}} converges if there exists a value ℓ {\displaystyle \ell } such that the sequence of the partial sums
converges to ℓ {\displaystyle \ell } . That is, for any ε > 0, there exists an integer N such that if n ≥ N , then
A series converges conditionally if the series ∑ n = 1 ∞ a n {\textstyle \sum _{n=1}^{\infty }a_{n}} converges but the series ∑ n = 1 ∞ | a n | {\textstyle \sum _{n=1}^{\infty }\left\vert a_{n}\right\vert } diverges.
A permutation is simply a bijection from the set of positive integers to itself. This means that if σ {\displaystyle \sigma } is a permutation, then for any positive integer b , {\displaystyle b,} there exists exactly one positive integer a {\displaystyle a} such that σ ( a ) = b . {\displaystyle \sigma (a)=b.} In particular, if x ≠ y {\displaystyle x\neq y} , then σ ( x ) ≠ σ ( y ) {\displaystyle \sigma (x)\neq \sigma (y)} .
Suppose that ( a 1 , a 2 , a 3 , … ) {\displaystyle (a_{1},a_{2},a_{3},\ldots )} is a sequence of real numbers , and that ∑ n = 1 ∞ a n {\textstyle \sum _{n=1}^{\infty }a_{n}} is conditionally convergent. Let M {\displaystyle M} be a real number. Then there exists a permutation σ {\displaystyle \sigma } such that
There also exists a permutation σ {\displaystyle \sigma } such that
The sum can also be rearranged to diverge to − ∞ {\displaystyle -\infty } or to fail to approach any limit, finite or infinite.
The alternating harmonic series is a classic example of a conditionally convergent series: ∑ n = 1 ∞ ( − 1 ) n + 1 n {\displaystyle \sum _{n=1}^{\infty }{\frac {(-1)^{n+1}}{n}}} is convergent, whereas ∑ n = 1 ∞ | ( − 1 ) n + 1 n | = ∑ n = 1 ∞ 1 n {\displaystyle \sum _{n=1}^{\infty }\left|{\frac {(-1)^{n+1}}{n}}\right|=\sum _{n=1}^{\infty }{\frac {1}{n}}} is the ordinary harmonic series , which diverges. Although in standard presentation the alternating harmonic series converges to ln(2) , its terms can be arranged to converge to any number, or even to diverge.
One instance of this is as follows. Begin with the series written in the usual order,
and rearrange and regroup the terms as:
where the pattern is: the first two terms are 1 and −1/2, whose sum is 1/2. The next term is −1/4. The next two terms are 1/3 and −1/6, whose sum is 1/6. The next term is −1/8. The next two terms are 1/5 and −1/10, whose sum is 1/10. In general, since every odd integer occurs once positively and every even integer occurs once negatively (half of them as multiples of 4, the other half as twice odd integers), the sum is composed of blocks of three which can be simplified as:
Hence, the above series can in fact be written as:
which is half the original sum, and could equal the original series only if that value were zero. The series can be shown to be greater than zero by the proof of Leibniz's theorem , using the fact that its second partial sum equals one half. [ 11 ] Alternatively, the value ln ( 2 ) {\displaystyle \ln(2)} to which it converges cannot be zero. Hence, the value of the series is shown to depend on the order in which its terms are summed.
It is true that the sequence:
contains all elements in the sequence:
However, since the summation is defined as ∑ n = 1 ∞ a n := lim n → ∞ ( a 1 + a 2 + ⋯ + a n ) {\displaystyle \sum _{n=1}^{\infty }a_{n}:=\lim _{n\to \infty }\left(a_{1}+a_{2}+\cdots +a_{n}\right)} and ∑ n = 1 ∞ b n := lim n → ∞ ( b 1 + b 2 + ⋯ + b n ) {\displaystyle \sum _{n=1}^{\infty }b_{n}:=\lim _{n\to \infty }\left(b_{1}+b_{2}+\cdots +b_{n}\right)} , the order of the terms can influence the limit. [ 11 ]
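The dependence on ordering can be checked numerically. The sketch below sums the rearranged blocks (1/(2k−1), −1/(4k−2), −1/(4k)) described above and confirms that the partial sums approach ln(2)/2 rather than ln(2); the helper name is illustrative.

```python
import math

# Hedged numerical check of the rearrangement above: blocks of
# (1/(2k-1), -1/(4k-2), -1/(4k)) give partial sums approaching ln(2)/2,
# not the ln(2) of the usual ordering of the alternating harmonic series.
def rearranged_sum(blocks):
    total = 0.0
    for k in range(1, blocks + 1):
        total += 1.0 / (2 * k - 1) - 1.0 / (4 * k - 2) - 1.0 / (4 * k)
    return total

print(rearranged_sum(100000), math.log(2) / 2)  # the two values agree closely
```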
An efficient way to recover and generalize the result of the previous section is to use the fact that
where γ is the Euler–Mascheroni constant , and where the notation o (1) denotes a quantity that depends upon the current variable (here, the variable is n ) in such a way that this quantity goes to 0 when the variable tends to infinity.
It follows that the sum of q even terms satisfies
and by taking the difference, one sees that the sum of p odd terms satisfies
Suppose that two positive integers a and b are given, and that a rearrangement of the alternating harmonic series is formed by taking, in order, a positive terms from the alternating harmonic series, followed by b negative terms, and repeating this pattern indefinitely (the alternating series itself corresponds to a = b = 1 , the example in the preceding section corresponds to a = 1, b = 2):
Then the partial sum of order ( a + b ) n of this rearranged series contains p = an positive odd terms and q = bn negative even terms, hence
It follows that the sum of this rearranged series is [ 12 ]
Suppose now that, more generally, a rearranged series of the alternating harmonic series is organized in such a way that the ratio p n / q n between the number of positive and negative terms in the partial sum of order n tends to a positive limit r . Then, the sum of such a rearrangement will be
and this explains that any real number x can be obtained as sum of a rearranged series of the alternating harmonic series: it suffices to form a rearrangement for which the limit r is equal to e 2 x / 4 .
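The formula ln 2 + ½ ln(a/b) can be verified numerically; the helper name below is illustrative.

```python
import math

# Hedged numerical check of the rearranged alternating harmonic series:
# a positive (odd-denominator) terms then b negative (even-denominator)
# terms per block should converge to ln 2 + (1/2) ln(a/b).
def rearranged_alt_harmonic(a, b, blocks):
    total, pos, neg = 0.0, 0, 0
    for _ in range(blocks):
        for _ in range(a):
            pos += 1
            total += 1.0 / (2 * pos - 1)
        for _ in range(b):
            neg += 1
            total -= 1.0 / (2 * neg)
    return total

expected = math.log(2) + 0.5 * math.log(3.0 / 2.0)
print(rearranged_alt_harmonic(3, 2, 20000), expected)  # values agree closely
```

Setting a = b = 1 recovers the usual value ln 2, and choosing the ratio a/b appropriately steers the sum, in line with the r = e^(2x)/4 remark above.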
Riemann's description of the theorem and its proof reads in full: [ 13 ]
… infinite series fall into two distinct classes, depending on whether or not they remain convergent when all the terms are made positive. In the first class the terms can be arbitrarily rearranged; in the second, on the other hand, the value is dependent on the ordering of the terms. Indeed, if we denote the positive terms of a series in the second class by a 1 , a 2 , a 3 , ... and the negative terms by − b 1 , − b 2 , − b 3 , ... then it is clear that Σ a as well as Σ b must be infinite. For if they were both finite, the series would still be convergent after making all the signs the same. If only one were infinite, then the series would diverge. Clearly now an arbitrarily given value C can be obtained by a suitable reordering of the terms. We take alternately the positive terms of the series until the sum is greater than C , and then the negative terms until the sum is less than C . The deviation from C never amounts to more than the size of the term at the last place the signs were switched. Now, since the number a as well as the numbers b become infinitely small with increasing index, so also are the deviations from C . If we proceed sufficiently far in the series, the deviation becomes arbitrarily small, that is, the series converges to C .
This can be given more detail as follows. [ 14 ] Recall that a conditionally convergent series of real terms has both infinitely many negative terms and infinitely many positive terms. First, define two quantities, a n + {\displaystyle a_{n}^{+}} and a n − {\displaystyle a_{n}^{-}} by:
That is, the series ∑ n = 1 ∞ a n + {\textstyle \sum _{n=1}^{\infty }a_{n}^{+}} includes all a n positive, with all negative terms replaced by zeroes, and the series ∑ n = 1 ∞ a n − {\textstyle \sum _{n=1}^{\infty }a_{n}^{-}} includes all a n negative, with all positive terms replaced by zeroes. Since ∑ n = 1 ∞ a n {\textstyle \sum _{n=1}^{\infty }a_{n}} is conditionally convergent, both the 'positive' and the 'negative' series diverge. Let M be any real number. Take just enough of the positive terms a n + {\displaystyle a_{n}^{+}} so that their sum exceeds M . That is, let p 1 be the smallest positive integer such that
This is possible because the partial sums of the a n + {\displaystyle a_{n}^{+}} series tend to + ∞ {\displaystyle +\infty } . Now let q 1 be the smallest positive integer such that
This number exists because the partial sums of a n − {\displaystyle a_{n}^{-}} tend to − ∞ {\displaystyle -\infty } . Now continue inductively, defining p 2 as the smallest integer larger than p 1 such that
and so on. The result may be viewed as a new sequence
Furthermore the partial sums of this new sequence converge to M . This can be seen from the fact that for any i ,
with the first inequality holding due to the fact that p i +1 has been defined as the smallest number larger than p i which makes the second inequality true; as a consequence, it holds that
Since the right-hand side converges to zero due to the assumption of conditional convergence, this shows that the ( p i +1 + q i ) 'th partial sum of the new sequence converges to M as i increases. Similarly, the ( p i +1 + q i +1 ) 'th partial sum also converges to M . Since the ( p i +1 + q i + 1) 'th, ( p i +1 + q i + 2) 'th, ... ( p i +1 + q i +1 − 1) 'th partial sums are valued between the ( p i +1 + q i ) 'th and ( p i +1 + q i +1 ) 'th partial sums, it follows that the whole sequence of partial sums converges to M .
Every entry in the original sequence a n appears in this new sequence whose partial sums converge to M . Those entries of the original sequence which are zero will appear twice in the new sequence (once in the 'positive' sequence and once in the 'negative' sequence), and every second such appearance can be removed, which does not affect the summation in any way. The new sequence is thus a permutation of the original sequence.
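Riemann's greedy construction can be sketched directly. The code below applies it to the alternating harmonic series with an arbitrary target M; per the argument above, the deviation from M is bounded by the size of the term at the last sign switch, which tends to zero.

```python
import math

# Hedged sketch of the greedy procedure described above, applied to the
# alternating harmonic series: take unused positive terms 1, 1/3, 1/5, ...
# while the partial sum is at or below the target M, and unused negative
# terms -1/2, -1/4, ... while it is above M.
def riemann_rearrange(M, n_terms=200000):
    total, pos, neg = 0.0, 0, 0
    for _ in range(n_terms):
        if total <= M:
            pos += 1
            total += 1.0 / (2 * pos - 1)   # next unused positive term
        else:
            neg += 1
            total -= 1.0 / (2 * neg)       # next unused negative term
    return total

print(riemann_rearrange(math.pi))  # partial sums settle near pi
```

Each term is used exactly once, so the procedure really does produce a permutation of the original series, as the surrounding proof explains.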
Let ∑ i = 1 ∞ a i {\textstyle \sum _{i=1}^{\infty }a_{i}} be a conditionally convergent series. The following is a proof that there exists a rearrangement of this series that tends to ∞ {\displaystyle \infty } (a similar argument can be used to show that − ∞ {\displaystyle -\infty } can also be attained).
The above proof of Riemann's original formulation only needs to be modified so that p i +1 is selected as the smallest integer larger than p i such that
and with q i +1 selected as the smallest integer larger than q i such that
The choice of i +1 on the left-hand sides is immaterial, as it could be replaced by any sequence increasing to infinity. Since a n − {\displaystyle a_{n}^{-}} converges to zero as n increases, for sufficiently large i one has
and this proves (just as with the analysis of convergence above) that the sequence of partial sums of the new sequence diverge to infinity.
The above proof only needs to be modified so that p i +1 is selected as the smallest integer larger than p i such that
and with q i +1 selected as the smallest integer larger than q i such that
This directly shows that the sequence of partial sums contains infinitely many entries which are larger than 1, and also infinitely many entries which are less than −1 , so that the sequence of partial sums cannot converge.
Given an infinite series a = ( a 1 , a 2 , . . . ) {\displaystyle a=(a_{1},a_{2},...)} , we may consider a set of "fixed points" I ⊂ N {\displaystyle I\subset \mathbb {N} } , and study the real numbers that the series can sum to if we are only allowed to permute indices in I {\displaystyle I} . That is, we let S ( a , I ) = { ∑ n ∈ N a π ( n ) : π is a permutation on N , such that ∀ n ∉ I , π ( n ) = n , and the summation converges. } {\displaystyle S(a,I)=\left\{\sum _{n\in \mathbb {N} }a_{\pi (n)}:\pi {\text{ is a permutation on }}\mathbb {N} ,{\text{ such that }}\forall n\not \in I,\pi (n)=n,{\text{ and the summation converges.}}\right\}} With this notation, we have:
Sierpiński proved that rearranging only the positive terms one can obtain a series converging to any prescribed value less than or equal to the sum of the original series, but larger values in general cannot be attained. [ 15 ] [ 16 ] [ 17 ] That is, let a {\displaystyle a} be a conditionally convergent sum, then S ( a , { n ∈ N : a n > 0 } ) {\displaystyle S(a,\{n\in \mathbb {N} :a_{n}>0\})} contains [ − ∞ , ∑ n ∈ N a n ] {\displaystyle \left[-\infty ,\sum _{n\in \mathbb {N} }a_{n}\right]} , but there is no guarantee that it contains any other number.
More generally, let J {\displaystyle J} be an ideal of N {\displaystyle \mathbb {N} } , then we can define S ( a , J ) = ∪ I ∈ J S ( a , I ) {\displaystyle S(a,J)=\cup _{I\in J}S(a,I)} .
Let J d {\displaystyle J_{d}} be the set of all asymptotic density zero sets I ⊂ N {\displaystyle I\subset \mathbb {N} } , that is, lim n → ∞ | [ 0 , n ] ∩ I | n = 0 {\displaystyle \lim _{n\to \infty }{\frac {|[0,n]\cap I|}{n}}=0} . It is clear that J d {\displaystyle J_{d}} is an ideal of N {\displaystyle \mathbb {N} } .
(Wilczyński, 2007) [ 18 ] — If a {\displaystyle a} is a conditionally convergent sum, then S ( a , J d ) = [ − ∞ , ∞ ] {\displaystyle S(a,J_{d})=[-\infty ,\infty ]} (that is, it is sufficient to rearrange a set of indices of asymptotic density zero).
Proof sketch: Given a {\displaystyle a} , a conditionally convergent sum, construct some I ∈ J d {\displaystyle I\in J_{d}} such that ∑ n ∈ I a n {\displaystyle \sum _{n\in I}a_{n}} and ∑ n ∉ I a n {\displaystyle \sum _{n\not \in I}a_{n}} are both conditionally convergent. Then, rearranging ∑ n ∈ I a n {\displaystyle \sum _{n\in I}a_{n}} suffices to converge to any number in [ − ∞ , + ∞ ] {\displaystyle [-\infty ,+\infty ]} .
Filipów and Szuca proved that other ideals also have this property. [ 19 ]
Given a converging series ∑ a n {\textstyle \sum a_{n}} of complex numbers , several cases can occur when considering the set of possible sums for all series ∑ a σ ( n ) {\textstyle \sum a_{\sigma (n)}} obtained by rearranging (permuting) the terms of that series:
More generally, given a converging series of vectors in a finite-dimensional real vector space E , the set of sums of converging rearranged series is an affine subspace of E .
Lejeune Dirichlet, G. (1889). "Beweis des Satzes, dass jede unbegrenzte arithmetische Progression, deren erstes Glied und Differenz ganze Zahlen ohne gemeinschaftlichen Factor sind, unendlich viele Primzahlen enthält". In Kronecker, L. (ed.). Werke. Band I . Berlin: Dietrich Reimer Verlag. pp. 313– 342. JFM 21.0016.01 . MR 0249268 .
Riemann, Bernhard (2004). "On the representation of a function by a trigonometric series". Collected Papers . Translated by Baker, Roger; Christenson, Charles; Orde, Henry. Translation of 1892 German edition. Heber City, UT: Kendrick Press. ISBN 0-9740427-2-2 . MR 2121437 . Zbl 1101.01013 .
A Riemann solver is a numerical method used to solve a Riemann problem . They are heavily used in computational fluid dynamics and computational magnetohydrodynamics .
Generally speaking, Riemann solvers are specific methods for computing the numerical flux across a discontinuity in the Riemann problem. [ 1 ] They form an important part of high-resolution schemes ; typically the right and left states for the Riemann problem are calculated using some form of nonlinear reconstruction, such as a flux limiter or a WENO method , and then used as the input for the Riemann solver. [ 2 ]
Sergei K. Godunov is credited with introducing the first exact Riemann solver for the Euler equations, [ 3 ] by extending the previous CIR (Courant-Isaacson-Rees) method to non-linear systems of hyperbolic conservation laws. Modern solvers are able to simulate relativistic effects and magnetic fields.
More recent research shows that an exact series solution to the Riemann problem exists, which may converge fast enough in some cases to avoid the iterative methods required in Godunov's scheme. [ 4 ]
As iterative solutions are too costly, especially in magnetohydrodynamics, some approximations have to be made. Some popular solvers are:
Philip L. Roe used the linearisation of the Jacobian, which he then solves exactly. [ 5 ]
The HLLE solver (developed by Ami Harten , Peter Lax , Bram van Leer and Einfeldt) is an approximate solution to the Riemann problem, which is only based on the integral form of the conservation laws and the largest and smallest signal velocities at the interface. [ 6 ] [ 7 ] The stability and robustness of the HLLE solver is closely related to the signal velocities and a single central average state, as proposed by Einfeldt in the original paper.
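A minimal sketch of an HLL-type flux of the kind described above, assuming the standard two-wave formula with estimated slowest and fastest signal velocities s_min and s_max; the function signature is an illustrative choice, not from any particular code.

```python
# Hedged sketch of an HLL-type numerical flux: it uses only the left and
# right states, their physical fluxes, and estimates s_min, s_max of the
# slowest and fastest signal velocities at the interface.
def hll_flux(u_left, u_right, f_left, f_right, s_min, s_max):
    if s_min >= 0.0:
        return f_left    # all waves move to the right: upwind on the left
    if s_max <= 0.0:
        return f_right   # all waves move to the left: upwind on the right
    # otherwise: single averaged intermediate state between the two signals
    return (s_max * f_left - s_min * f_right
            + s_min * s_max * (u_right - u_left)) / (s_max - s_min)

# smoke test on scalar advection u_t + u_x = 0 (flux f = u, speed +1):
print(hll_flux(2.0, 1.0, 2.0, 1.0, 1.0, 1.0))  # -> 2.0, the upwind flux
```

Because the intermediate state is a single average, contact discontinuities are smeared; this is exactly the diffusivity that the HLLC modification described below is designed to remove.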
The HLLC (Harten-Lax-van Leer-Contact) solver was introduced by Toro. [ 8 ] It restores the missing rarefaction wave by using an estimation technique, such as linearisation. More advanced techniques exist, like using the Roe average velocity for the middle wave speed. These schemes are quite robust and efficient but somewhat more diffusive. [ 9 ]
These solvers were introduced by Hiroaki Nishikawa and Kitamura, [ 10 ] in order to overcome the carbuncle problems
of the Roe solver and the excessive diffusion of the HLLE solver at the same time. They developed robust and accurate Riemann solvers by combining the Roe solver and the HLLE/Rusanov solvers: they show that, applied in two orthogonal directions, the two Riemann solvers can be combined into a single Roe-type solver (the Roe solver with modified wave speeds). In particular, the one derived from the Roe and HLLE solvers, called the Rotated-RHLL solver, is extremely robust (carbuncle-free for all possible test cases on both structured and unstructured grids) and accurate (as accurate as the Roe solver for boundary layer calculations).
There are a variety of other solvers available, including more variants of the HLL scheme [ 11 ] and solvers based on flux-splitting via characteristic decomposition. [ 12 ]
In mathematics , a Riemann sum is a certain kind of approximation of an integral by a finite sum. It is named after nineteenth century German mathematician Bernhard Riemann . One very common application is in numerical integration , i.e., approximating the area of functions or lines on a graph, where it is also known as the rectangle rule . It can also be applied for approximating the length of curves and other approximations.
The sum is calculated by partitioning the region into shapes ( rectangles , trapezoids , parabolas , or cubics —sometimes infinitesimally small) that together form a region that is similar to the region being measured, then calculating the area for each of these shapes, and finally adding all of these small areas together. This approach can be used to find a numerical approximation for a definite integral even if the fundamental theorem of calculus does not make it easy to find a closed-form solution .
Because the region filled by the small shapes is usually not exactly the same shape as the region being measured, the Riemann sum will differ from the area being measured. This error can be reduced by dividing up the region more finely, using smaller and smaller shapes. As the shapes get smaller and smaller, the sum approaches the Riemann integral .
Let f : [ a , b ] → R {\displaystyle f:[a,b]\to \mathbb {R} } be a function defined on a closed interval [ a , b ] {\displaystyle [a,b]} of the real numbers, R {\displaystyle \mathbb {R} } , and let P = ( x 0 , x 1 , … , x n ) {\displaystyle P=(x_{0},x_{1},\ldots ,x_{n})} be a partition of [ a , b ] {\displaystyle [a,b]} , that is a = x 0 < x 1 < x 2 < ⋯ < x n = b . {\displaystyle a=x_{0}<x_{1}<x_{2}<\dots <x_{n}=b.} A Riemann sum S {\displaystyle S} of f {\displaystyle f} over [ a , b ] {\displaystyle [a,b]} with partition P {\displaystyle P} is defined as S = ∑ i = 1 n f ( x i ∗ ) Δ x i , {\displaystyle S=\sum _{i=1}^{n}f(x_{i}^{*})\,\Delta x_{i},} where Δ x i = x i − x i − 1 {\displaystyle \Delta x_{i}=x_{i}-x_{i-1}} and x i ∗ ∈ [ x i − 1 , x i ] {\displaystyle x_{i}^{*}\in [x_{i-1},x_{i}]} . [ 1 ] One might produce different Riemann sums depending on which x i ∗ {\displaystyle x_{i}^{*}} 's are chosen. In the end this choice will not matter if the function is Riemann integrable : all such sums converge to the same value as the widths Δ x i {\displaystyle \Delta x_{i}} of the summands approach zero.
Specific choices of x i ∗ {\displaystyle x_{i}^{*}} give different types of Riemann sums:
All these Riemann summation methods are among the most basic ways to accomplish numerical integration . Loosely speaking, a function is Riemann integrable if all Riemann sums converge as the partition "gets finer and finer".
While not derived as a Riemann sum, taking the average of the left and right Riemann sums is the trapezoidal rule and gives a trapezoidal sum . It is one of the simplest of a very general way of approximating integrals using weighted averages. This is followed in complexity by Simpson's rule and Newton–Cotes formulas .
Any Riemann sum on a given partition (that is, for any choice of x i ∗ {\displaystyle x_{i}^{*}} between x i − 1 {\displaystyle x_{i-1}} and x i {\displaystyle x_{i}} ) is contained between the lower and upper Darboux sums. This forms the basis of the Darboux integral , which is ultimately equivalent to the Riemann integral.
The four Riemann summation methods are usually best approached with subintervals of equal size. The interval [ a , b ] is therefore divided into n {\displaystyle n} subintervals, each of length Δ x = b − a n . {\displaystyle \Delta x={\frac {b-a}{n}}.}
The points in the partition will then be a , a + Δ x , a + 2 Δ x , … , a + ( n − 2 ) Δ x , a + ( n − 1 ) Δ x , b . {\displaystyle a,\;a+\Delta x,\;a+2\Delta x,\;\ldots ,\;a+(n-2)\Delta x,\;a+(n-1)\Delta x,\;b.}
For the left rule, the function is approximated by its values at the left endpoints of the subintervals. This gives multiple rectangles with base Δ x and height f ( a + i Δ x ) . Doing this for i = 0, 1, ..., n − 1 , and summing the resulting areas gives S l e f t = Δ x [ f ( a ) + f ( a + Δ x ) + f ( a + 2 Δ x ) + ⋯ + f ( b − Δ x ) ] . {\displaystyle S_{\mathrm {left} }=\Delta x\left[f(a)+f(a+\Delta x)+f(a+2\Delta x)+\dots +f(b-\Delta x)\right].}
The left Riemann sum amounts to an overestimation if f is monotonically decreasing on this interval, and an underestimation if it is monotonically increasing .
The error of this formula will be | ∫ a b f ( x ) d x − S l e f t | ≤ M 1 ( b − a ) 2 2 n , {\displaystyle \left\vert \int _{a}^{b}f(x)\,dx-S_{\mathrm {left} }\right\vert \leq {\frac {M_{1}(b-a)^{2}}{2n}},} where M 1 {\displaystyle M_{1}} is the maximum value of the absolute value of f ′ ( x ) {\displaystyle f^{\prime }(x)} over the interval.
For the right rule, the function is approximated by its values at the right endpoints of the subintervals. This gives multiple rectangles with base Δ x and height f ( a + i Δ x ) . Doing this for i = 1, ..., n , and summing the resulting areas gives S r i g h t = Δ x [ f ( a + Δ x ) + f ( a + 2 Δ x ) + ⋯ + f ( b ) ] . {\displaystyle S_{\mathrm {right} }=\Delta x\left[f(a+\Delta x)+f(a+2\Delta x)+\dots +f(b)\right].}
The right Riemann sum amounts to an underestimation if f is monotonically decreasing , and an overestimation if it is monotonically increasing .
The error of this formula will be | ∫ a b f ( x ) d x − S r i g h t | ≤ M 1 ( b − a ) 2 2 n , {\displaystyle \left\vert \int _{a}^{b}f(x)\,dx-S_{\mathrm {right} }\right\vert \leq {\frac {M_{1}(b-a)^{2}}{2n}},} where M 1 {\displaystyle M_{1}} is the maximum value of the absolute value of f ′ ( x ) {\displaystyle f^{\prime }(x)} over the interval.
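The left and right rules and their error bounds can be sketched together; f(x) = exp(x) on [0, 1] is an illustrative choice (exact integral e − 1), and the function names are not from any library.

```python
import math

# Hedged sketch of the left and right Riemann sums described above.
def left_riemann(f, a, b, n):
    dx = (b - a) / n
    return dx * sum(f(a + i * dx) for i in range(n))         # left endpoints

def right_riemann(f, a, b, n):
    dx = (b - a) / n
    return dx * sum(f(a + i * dx) for i in range(1, n + 1))  # right endpoints

n = 1000
left = left_riemann(math.exp, 0.0, 1.0, n)
right = right_riemann(math.exp, 0.0, 1.0, n)
exact = math.e - 1.0
# exp is increasing, so the left rule underestimates and the right rule
# overestimates; both errors respect the M1 (b-a)^2 / (2n) bound, M1 = e.
print(left < exact < right)
```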
For the midpoint rule, the function is approximated by its values at the midpoints of the subintervals. This gives f ( a + Δ x /2) for the first subinterval, f ( a + 3Δ x /2) for the next one, and so on until f ( b − Δ x /2) . Summing the resulting areas gives S m i d = Δ x [ f ( a + Δ x 2 ) + f ( a + 3 Δ x 2 ) + ⋯ + f ( b − Δ x 2 ) ] . {\displaystyle S_{\mathrm {mid} }=\Delta x\left[f\left(a+{\tfrac {\Delta x}{2}}\right)+f\left(a+{\tfrac {3\Delta x}{2}}\right)+\dots +f\left(b-{\tfrac {\Delta x}{2}}\right)\right].}
The error of this formula will be | ∫ a b f ( x ) d x − S m i d | ≤ M 2 ( b − a ) 3 24 n 2 , {\displaystyle \left\vert \int _{a}^{b}f(x)\,dx-S_{\mathrm {mid} }\right\vert \leq {\frac {M_{2}(b-a)^{3}}{24n^{2}}},} where M 2 {\displaystyle M_{2}} is the maximum value of the absolute value of f ′ ′ ( x ) {\displaystyle f^{\prime \prime }(x)} over the interval. This error is half of that of the trapezoidal sum; as such the midpoint Riemann sum is typically the most accurate of these simple Riemann sum rules.
A generalized midpoint rule formula, also known as the enhanced midpoint integration, is given by ∫ 0 1 f ( x ) d x = 2 ∑ m = 1 M ∑ n = 0 ∞ 1 ( 2 M ) 2 n + 1 ( 2 n + 1 ) ! f ( 2 n ) ( x ) | x = m − 1 / 2 M , {\displaystyle \int _{0}^{1}f(x)\,dx=2\sum _{m=1}^{M}{\sum _{n=0}^{\infty }{{\frac {1}{{\left(2M\right)^{2n+1}}\left({2n+1}\right)!}}{{\left.f^{(2n)}(x)\right|}_{x={\frac {m-1/2}{M}}}}}}\,\,,} where f ( 2 n ) {\displaystyle f^{(2n)}} denotes the derivative of even order 2 n , evaluated at the indicated midpoints.
For a function g ( t ) {\displaystyle g(t)} defined over interval ( a , b ) {\displaystyle (a,b)} , its integral is ∫ a b g ( t ) d t = ∫ 0 b − a g ( τ + a ) d τ = ( b − a ) ∫ 0 1 g ( ( b − a ) x + a ) d x . {\displaystyle \int _{a}^{b}g(t)\,dt=\int _{0}^{b-a}g(\tau +a)\,d\tau =(b-a)\int _{0}^{1}g((b-a)x+a)\,dx.} Therefore, we can apply this generalized midpoint integration formula by assuming that f ( x ) = ( b − a ) g ( ( b − a ) x + a ) {\displaystyle f(x)=(b-a)\,g((b-a)x+a)} . This formula is particularly efficient for the numerical integration when the integrand f ( x ) {\displaystyle f(x)} is a highly oscillating function.
For the trapezoidal rule, the function is approximated by the average of its values at the left and right endpoints of the subintervals. Using the area formula 1 2 h ( b 1 + b 2 ) {\displaystyle {\tfrac {1}{2}}h(b_{1}+b_{2})} for a trapezium with parallel sides b 1 and b 2 , and height h , and summing the resulting areas gives S t r a p = 1 2 Δ x [ f ( a ) + 2 f ( a + Δ x ) + 2 f ( a + 2 Δ x ) + ⋯ + f ( b ) ] . {\displaystyle S_{\mathrm {trap} }={\tfrac {1}{2}}\Delta x\left[f(a)+2f(a+\Delta x)+2f(a+2\Delta x)+\dots +f(b)\right].}
The error of this formula will be | ∫ a b f ( x ) d x − S t r a p | ≤ M 2 ( b − a ) 3 12 n 2 , {\displaystyle \left\vert \int _{a}^{b}f(x)\,dx-S_{\mathrm {trap} }\right\vert \leq {\frac {M_{2}(b-a)^{3}}{12n^{2}}},} where M 2 {\displaystyle M_{2}} is the maximum value of the absolute value of f ″ ( x ) {\displaystyle f''(x)} .
The approximation obtained with the trapezoidal sum for a function is the same as the average of the left hand and right hand sums of that function.
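Both claims above (the midpoint error being half the trapezoidal error with opposite sign, and the trapezoidal sum equaling the average of the left and right sums) can be checked numerically; f(x) = sin(x) on [0, π] is an illustrative choice with exact integral 2.

```python
import math

# Hedged comparison of the midpoint and trapezoidal sums described above.
def left_sum(f, a, b, n):
    dx = (b - a) / n
    return dx * sum(f(a + i * dx) for i in range(n))

def right_sum(f, a, b, n):
    dx = (b - a) / n
    return dx * sum(f(a + i * dx) for i in range(1, n + 1))

def midpoint_sum(f, a, b, n):
    dx = (b - a) / n
    return dx * sum(f(a + (i + 0.5) * dx) for i in range(n))

n = 100
# trapezoidal sum, formed as the average of the left and right sums:
trap = 0.5 * (left_sum(math.sin, 0.0, math.pi, n)
              + right_sum(math.sin, 0.0, math.pi, n))
mid_err = midpoint_sum(math.sin, 0.0, math.pi, n) - 2.0
trap_err = trap - 2.0
print(mid_err, trap_err)  # midpoint error is about half the trapezoidal one
```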
For a one-dimensional Riemann sum over domain [ a , b ] {\displaystyle [a,b]} , as the maximum size of a subinterval shrinks to zero (that is the limit of the norm of the subintervals goes to zero), some functions will have all Riemann sums converge to the same value. This limiting value, if it exists, is defined as the definite Riemann integral of the function over the domain, ∫ a b f ( x ) d x = lim ‖ Δ x ‖ → 0 ∑ i = 1 n f ( x i ∗ ) Δ x i . {\displaystyle \int _{a}^{b}f(x)\,dx=\lim _{\|\Delta x\|\rightarrow 0}\sum _{i=1}^{n}f(x_{i}^{*})\,\Delta x_{i}.}
For a finite-sized domain, if the maximum size of a subinterval shrinks to zero, the number of subintervals goes to infinity. For finite partitions, Riemann sums are always approximations to the limiting value, and the approximation gets better as the partition gets finer. The following animations help demonstrate how increasing the number of subintervals (while lowering the maximum subinterval size) better approximates the "area" under the curve:
Since the red function here is assumed to be a smooth function , all three Riemann sums will converge to the same value as the number of subintervals goes to infinity.
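The convergence can also be demonstrated numerically rather than graphically; the following sketch (an illustrative addition) checks that the left, right, and midpoint sums of 3x² over [0, 1] all approach the exact integral 1 as n grows:

```python
def riemann_sum(f, a, b, n, rule="left"):
    # Sample point within each subinterval: left end, right end, or midpoint.
    dx = (b - a) / n
    offset = {"left": 0.0, "right": 1.0, "midpoint": 0.5}[rule]
    return dx * sum(f(a + (i + offset) * dx) for i in range(n))

# All three sums approach the same limit: the integral of 3x^2 over [0, 1] is 1.
for rule in ("left", "right", "midpoint"):
    errors = [abs(riemann_sum(lambda x: 3 * x * x, 0, 1, n, rule) - 1.0)
              for n in (10, 100, 1000)]
    assert errors[0] > errors[1] > errors[2]  # error shrinks as n grows
```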
Taking an example, the area under the curve y = x 2 over [0, 2] can be procedurally computed using Riemann's method.
The interval [0, 2] is firstly divided into n subintervals, each of which is given a width of 2 n {\displaystyle {\tfrac {2}{n}}} ; these are the widths of the Riemann rectangles (hereafter "boxes"). Because the right Riemann sum is to be used, the sequence of x coordinates for the boxes will be x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\ldots ,x_{n}} . Therefore, the sequence of the heights of the boxes will be x 1 2 , x 2 2 , … , x n 2 {\displaystyle x_{1}^{2},x_{2}^{2},\ldots ,x_{n}^{2}} . It is an important fact that x i = 2 i n {\displaystyle x_{i}={\tfrac {2i}{n}}} , and x n = 2 {\displaystyle x_{n}=2} .
The area of each box will be 2 n × x i 2 {\displaystyle {\tfrac {2}{n}}\times x_{i}^{2}} and therefore the n th right Riemann sum will be: S = 2 n ( 2 n ) 2 + ⋯ + 2 n ( 2 i n ) 2 + ⋯ + 2 n ( 2 n n ) 2 = 8 n 3 ( 1 + ⋯ + i 2 + ⋯ + n 2 ) = 8 n 3 ( n ( n + 1 ) ( 2 n + 1 ) 6 ) = 8 n 3 ( 2 n 3 + 3 n 2 + n 6 ) = 8 3 + 4 n + 4 3 n 2 . {\displaystyle {\begin{aligned}S&={\frac {2}{n}}\left({\frac {2}{n}}\right)^{2}+\dots +{\frac {2}{n}}\left({\frac {2i}{n}}\right)^{2}+\dots +{\frac {2}{n}}\left({\frac {2n}{n}}\right)^{2}\\[1ex]&={\frac {8}{n^{3}}}\left(1+\dots +i^{2}+\dots +n^{2}\right)\\[1ex]&={\frac {8}{n^{3}}}\left({\frac {n(n+1)(2n+1)}{6}}\right)\\[1ex]&={\frac {8}{n^{3}}}\left({\frac {2n^{3}+3n^{2}+n}{6}}\right)\\[1ex]&={\frac {8}{3}}+{\frac {4}{n}}+{\frac {4}{3n^{2}}}.\end{aligned}}}
Taking the limit as n → ∞ shows that the approximation approaches the actual value of the area under the curve as the number of boxes increases. Hence: lim n → ∞ S = lim n → ∞ ( 8 3 + 4 n + 4 3 n 2 ) = 8 3 . {\displaystyle \lim _{n\to \infty }S=\lim _{n\to \infty }\left({\frac {8}{3}}+{\frac {4}{n}}+{\frac {4}{3n^{2}}}\right)={\frac {8}{3}}.}
This method agrees with the definite integral as calculated in more mechanical ways: ∫ 0 2 x 2 d x = 8 3 . {\displaystyle \int _{0}^{2}x^{2}\,dx={\frac {8}{3}}.}
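The procedural computation above translates directly into a short program; the sketch below (an illustrative addition) evaluates the n-th right Riemann sum and compares it with the closed form 8/3 + 4/n + 4/(3n²):

```python
def right_riemann_x_squared(n):
    # n-th right Riemann sum for y = x^2 over [0, 2]: boxes of width 2/n
    # with heights (2i/n)^2 for i = 1, ..., n.
    width = 2.0 / n
    return sum(width * (i * width) ** 2 for i in range(1, n + 1))

for n in (10, 100, 1000):
    closed_form = 8.0 / 3.0 + 4.0 / n + 4.0 / (3.0 * n ** 2)
    assert abs(right_riemann_x_squared(n) - closed_form) < 1e-9
# The sums approach the exact area 8/3 as n grows.
```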
Because the function is continuous and monotonically increasing over the interval, a right Riemann sum overestimates the integral by the largest amount (while a left Riemann sum would underestimate the integral by the largest amount). This fact, which is intuitively clear from the diagrams, shows how the nature of the function determines how accurately the integral is estimated. While simple, right and left Riemann sums are often less accurate than more advanced techniques of estimating an integral such as the Trapezoidal rule or Simpson's rule .
The example function has an easy-to-find antiderivative, so estimating the integral by Riemann sums is mostly an academic exercise; however, not all functions have closed-form antiderivatives, so estimating their integrals by summation is practically important.
The basic idea behind a Riemann sum is to "break-up" the domain via a partition into pieces, multiply the "size" of each piece by some value the function takes on that piece, and sum all these products. This can be generalized to allow Riemann sums for functions over domains of more than one dimension.
While the process of partitioning the domain is intuitively easy to grasp, the technical details of how the domain may be partitioned get much more complicated than the one-dimensional case and involve aspects of the geometrical shape of the domain. [ 4 ]
In two dimensions, the domain A {\displaystyle A} may be divided into a number of two-dimensional cells A i {\displaystyle A_{i}} such that A = ⋃ i A i {\textstyle A=\bigcup _{i}A_{i}} . Each cell then can be interpreted as having an "area" denoted by Δ A i {\displaystyle \Delta A_{i}} . [ 5 ] The two-dimensional Riemann sum is S = ∑ i = 1 n f ( x i ∗ , y i ∗ ) Δ A i , {\displaystyle S=\sum _{i=1}^{n}f(x_{i}^{*},y_{i}^{*})\,\Delta A_{i},} where ( x i ∗ , y i ∗ ) ∈ A i {\displaystyle (x_{i}^{*},y_{i}^{*})\in A_{i}} .
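A two-dimensional Riemann sum over a rectangle can be sketched as a double loop over cells; the midpoint sample points and the test integrand x² + y² below are illustrative choices:

```python
def riemann_sum_2d(f, a, b, c, d, nx, ny):
    # Partition [a,b] x [c,d] into nx * ny rectangular cells A_i of area
    # dA = dx * dy, sampling f at each cell's center (x_i*, y_i*).
    dx, dy = (b - a) / nx, (d - c) / ny
    total = 0.0
    for i in range(nx):
        for j in range(ny):
            xs = a + (i + 0.5) * dx
            ys = c + (j + 0.5) * dy
            total += f(xs, ys) * dx * dy
    return total

# Exact value of the integral of x^2 + y^2 over the unit square is 2/3.
approx = riemann_sum_2d(lambda x, y: x * x + y * y, 0, 1, 0, 1, 100, 100)
```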
In three dimensions, the domain V {\displaystyle V} is partitioned into a number of three-dimensional cells V i {\displaystyle V_{i}} such that V = ⋃ i V i {\textstyle V=\bigcup _{i}V_{i}} . Each cell then can be interpreted as having a "volume" denoted by Δ V i {\displaystyle \Delta V_{i}} . The three-dimensional Riemann sum is [ 6 ] S = ∑ i = 1 n f ( x i ∗ , y i ∗ , z i ∗ ) Δ V i , {\displaystyle S=\sum _{i=1}^{n}f(x_{i}^{*},y_{i}^{*},z_{i}^{*})\,\Delta V_{i},} where ( x i ∗ , y i ∗ , z i ∗ ) ∈ V i {\displaystyle (x_{i}^{*},y_{i}^{*},z_{i}^{*})\in V_{i}} .
Higher dimensional Riemann sums follow a similar pattern. An n -dimensional Riemann sum is S = ∑ i f ( P i ∗ ) Δ V i , {\displaystyle S=\sum _{i}f(P_{i}^{*})\,\Delta V_{i},} where P i ∗ ∈ V i {\displaystyle P_{i}^{*}\in V_{i}} , that is, it is a point in the n -dimensional cell V i {\displaystyle V_{i}} with n -dimensional volume Δ V i {\displaystyle \Delta V_{i}} .
In high generality, Riemann sums can be written S = ∑ i f ( P i ∗ ) μ ( V i ) , {\displaystyle S=\sum _{i}f(P_{i}^{*})\mu (V_{i}),} where P i ∗ {\displaystyle P_{i}^{*}} stands for any arbitrary point contained in the set V i {\displaystyle V_{i}} and μ {\displaystyle \mu } is a measure on the underlying set. Roughly speaking, a measure is a function that gives a "size" of a set, in this case the size of the set V i {\displaystyle V_{i}} ; in one dimension this can often be interpreted as a length, in two dimensions as an area, in three dimensions as a volume, and so on. | https://en.wikipedia.org/wiki/Riemann_sum |
In mathematical general relativity , the Penrose inequality , first conjectured by Sir Roger Penrose , estimates the mass of a spacetime in terms of the total area of its black holes and is a generalization of the positive mass theorem . The Riemannian Penrose inequality is an important special case. Specifically, if ( M , g ) is an asymptotically flat Riemannian 3-manifold with nonnegative scalar curvature and ADM mass m , and A is the area of the outermost minimal surface (possibly with multiple connected components ), then the Riemannian Penrose inequality asserts m ≥ A 16 π . {\displaystyle m\geq {\sqrt {\frac {A}{16\pi }}}.}
This is purely a geometrical fact, and it corresponds to the case of a complete three-dimensional, space-like , totally geodesic submanifold of a (3 + 1)-dimensional spacetime. Such a submanifold is often called a time-symmetric initial data set for a spacetime. The condition of ( M , g ) having nonnegative scalar curvature is equivalent to the spacetime obeying the dominant energy condition .
This inequality was first proved by Gerhard Huisken and Tom Ilmanen in 1997 in the case where A is the area of the largest component of the outermost minimal surface. Their proof relied on the machinery of weakly defined inverse mean curvature flow , which they developed. In 1999, Hubert Bray gave the first complete proof of the above inequality using a conformal flow of metrics. Both of the papers were published in 2001.
The original physical argument that led Penrose to conjecture such an inequality invoked the Hawking area theorem and the cosmic censorship hypothesis .
Both the Bray and the Huisken–Ilmanen proofs of the Riemannian Penrose inequality state that, under the hypotheses, if equality holds, m = A 16 π , {\displaystyle m={\sqrt {\frac {A}{16\pi }}},}
then the manifold in question is isometric to a slice of the Schwarzschild spacetime outside its outermost minimal surface, which is a sphere of Schwarzschild radius .
More generally, Penrose conjectured that an inequality as above should hold for spacelike submanifolds of spacetimes that are not necessarily time-symmetric. In this case, nonnegative scalar curvature is replaced with the dominant energy condition , and one possibility is to replace the minimal surface condition with an apparent horizon condition. Proving such an inequality remains an open problem in general relativity, called the Penrose conjecture.
| https://en.wikipedia.org/wiki/Riemannian_Penrose_inequality |
Computational anatomy (CA) is the study of shape and form in medical imaging . The study of deformable shapes in CA relies on high-dimensional diffeomorphism groups Diff V {\displaystyle \operatorname {Diff} _{V}} which generate orbits of the form M ≐ { φ ⋅ m ∣ φ ∈ Diff V } {\displaystyle {\mathcal {M}}\doteq \{\varphi \cdot m\mid \varphi \in \operatorname {Diff} _{V}\}} . In CA, this orbit is in general considered a smooth Riemannian manifold , since at every point m ∈ M {\displaystyle m\in {\mathcal {M}}} of the manifold there is an inner product inducing the norm ‖ ⋅ ‖ m {\displaystyle \|\cdot \|_{m}} on the tangent space that varies smoothly from point to point in the manifold of shapes. This is generated by viewing the
group of diffeomorphisms φ ∈ Diff V {\displaystyle \varphi \in \operatorname {Diff} _{V}} as a Riemannian manifold with ‖ ⋅ ‖ φ {\displaystyle \|\cdot \|_{\varphi }} , associated to the tangent space at φ ∈ Diff V {\displaystyle \varphi \in \operatorname {Diff} _{V}} . This induces the norm and metric on the orbit m ∈ M {\displaystyle m\in {\mathcal {M}}} under the action from the group of diffeomorphisms.
The diffeomorphisms in computational anatomy are generated to satisfy the Lagrangian and Eulerian specification of the flow fields , φ t , t ∈ [ 0 , 1 ] {\displaystyle \varphi _{t},t\in [0,1]} , generated via the ordinary differential equation φ ˙ t = v t ∘ φ t , φ 0 = id , {\displaystyle {\dot {\varphi }}_{t}=v_{t}\circ \varphi _{t},\ \varphi _{0}=\operatorname {id} ,}
with the Eulerian vector fields v ≐ ( v 1 , v 2 , v 3 ) {\displaystyle v\doteq (v_{1},v_{2},v_{3})} in R 3 {\displaystyle {\mathbb {R} }^{3}} for v t = φ ˙ t ∘ φ t − 1 , t ∈ [ 0 , 1 ] {\displaystyle v_{t}={\dot {\varphi }}_{t}\circ \varphi _{t}^{-1},t\in [0,1]} , with the inverse for the flow given by
and the 3 × 3 {\displaystyle 3\times 3} Jacobian matrix for flows in R 3 {\displaystyle \mathbb {R} ^{3}} given as D φ ≐ ( ∂ φ i ∂ x j ) . {\displaystyle \ D\varphi \doteq \left({\frac {\partial \varphi _{i}}{\partial x_{j}}}\right).}
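As a toy illustration (not an algorithm from the computational anatomy literature), the flow equation φ̇_t = v_t ∘ φ_t can be integrated approximately by forward-Euler stepping on a finite set of particles; the stationary velocity field below is a hypothetical choice:

```python
def integrate_flow(v, points, steps=100):
    # Forward-Euler sketch of the flow ODE d/dt phi_t = v_t(phi_t), phi_0 = id,
    # tracked on a finite set of particles (the Lagrangian description).
    dt = 1.0 / steps
    pts = [list(p) for p in points]
    for k in range(steps):
        t = k * dt
        pts = [[x + dt * vi for x, vi in zip(p, v(t, p))] for p in pts]
    return pts

# Hypothetical stationary velocity field v(t, x) = x: the time-1 flow is
# phi_1(x) = e * x, up to O(dt) Euler error.
flow = integrate_flow(lambda t, p: p, [[1.0, 0.0], [0.0, 2.0]])
```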
To ensure smooth flows of diffeomorphisms with inverses, the vector fields on R 3 {\displaystyle {\mathbb {R} }^{3}} must be at least 1-time continuously differentiable in space. [ 1 ] [ 2 ] They are modelled as elements of the Hilbert space ( V , ‖ ⋅ ‖ V ) {\displaystyle (V,\|\cdot \|_{V})} ; by the Sobolev embedding theorems, requiring each element v i ∈ H 0 3 , i = 1 , 2 , 3 , {\displaystyle v_{i}\in H_{0}^{3},i=1,2,3,} to have 3 square-integrable derivatives implies that ( V , ‖ ⋅ ‖ V ) {\displaystyle (V,\|\cdot \|_{V})} embeds smoothly in the space of 1-time continuously differentiable functions. [ 1 ] [ 2 ] The diffeomorphism group consists of flows with vector fields absolutely integrable in the Sobolev norm:
Shapes in Computational Anatomy (CA) are studied via the use of diffeomorphic mapping for establishing correspondences between anatomical coordinate systems. In this setting, 3-dimensional medical images are modelled as diffeomorphic transformations of some exemplar, termed the template I t e m p {\displaystyle I_{temp}} , so that the observed images are elements of the random orbit model of CA . For images these are defined as I ∈ I ≐ { I = I t e m p ∘ φ , φ ∈ Diff V } {\displaystyle I\in {\mathcal {I}}\doteq \{I=I_{temp}\circ \varphi ,\varphi \in \operatorname {Diff} _{V}\}} , with charts representing sub-manifolds denoted as M ≐ { φ ⋅ m t e m p : φ ∈ Diff V } {\displaystyle {\mathcal {M}}\doteq \{\varphi \cdot m_{temp}:\varphi \in \operatorname {Diff} _{V}\}} .
The orbit of shapes and forms in Computational Anatomy are generated by the group action M ≐ { φ ⋅ m : φ ∈ Diff V } {\displaystyle {\mathcal {M}}\doteq \{\varphi \cdot m:\varphi \in \operatorname {Diff} _{V}\}} . This is made into a Riemannian orbit by introducing a metric associated to each point and associated tangent space. For this a metric is defined on the group which induces the metric on the orbit. Take as the metric for Computational anatomy at each element of the tangent space φ ∈ Diff V {\displaystyle \varphi \in \operatorname {Diff} _{V}} in the group of diffeomorphisms
with the vector fields modelled to be in a Hilbert space with the norm in the Hilbert space ( V , ‖ ⋅ ‖ V ) {\displaystyle (V,\|\cdot \|_{V})} . We model V {\displaystyle V} as a reproducing kernel Hilbert space (RKHS) defined by a 1-1, differential operator A : V → V ∗ {\displaystyle A:V\rightarrow V^{*}} . For σ ( v ) ≐ A v ∈ V ∗ {\displaystyle \sigma (v)\doteq Av\in V^{*}} a distribution or generalized function, the linear form ( σ ∣ w ) ≐ ∫ R 3 ∑ i w i ( x ) σ i ( d x ) {\displaystyle (\sigma \mid w)\doteq \int _{\mathbb {R} ^{3}}\sum _{i}w_{i}(x)\sigma _{i}(dx)} determines the norm and inner product for v ∈ V {\displaystyle v\in V} according to
where the integral is calculated by integration by parts for A v {\displaystyle Av} a generalized function A v ∈ V ∗ {\displaystyle Av\in V^{*}} the dual-space.
The differential operator is selected so that the Green's kernel associated to the inverse is sufficiently smooth that the vector fields support one continuous derivative .
The metric on the group of diffeomorphisms is defined by the distance as defined on pairs of elements in the group of diffeomorphisms according to
This distance provides a right-invariant metric of diffeomorphometry, [ 3 ] [ 4 ] [ 5 ] invariant to reparameterization of space since for all φ ∈ Diff V {\displaystyle \varphi \in \operatorname {Diff} _{V}} ,
The Lie bracket gives the adjustment of the velocity term resulting from a perturbation of the motion in the setting of curved spaces. Using Hamilton's principle of least-action derives the optimizing flows as a critical point for the action integral of the integral of the kinetic energy. The Lie bracket for vector fields in Computational Anatomy was first introduced in Miller, Trouve and Younes. [ 6 ] The derivation calculates the perturbation δ v {\displaystyle \delta v} on the vector fields v ε = v + ε δ v {\displaystyle v^{\varepsilon }=v+\varepsilon \delta v} in terms of the derivative in time of the group perturbation adjusted by the correction of the Lie bracket of vector fields in this function setting involving the Jacobian matrix, unlike the matrix group case:
Proof: Proving Lie bracket of vector fields take a first order perturbation of the flow at point φ ∈ Diff V {\displaystyle \varphi \in \operatorname {Diff} _{V}} .
Taking the first order perturbation gives φ t ε ≐ ( id + ε w ) ∘ φ = φ + ε w ∘ φ {\displaystyle \varphi _{t}^{\varepsilon }\doteq (\operatorname {id} +\varepsilon w)\circ \varphi =\varphi +\varepsilon w\circ \varphi } , with fixed boundary w 0 = w 1 = 0 {\displaystyle w_{0}=w_{1}=0} , with d d t φ t ε = v t ε ∘ φ t ε , φ 0 ε = id , φ 1 ε = φ 1 {\displaystyle {\frac {d}{dt}}\varphi _{t}^{\varepsilon }=v_{t}^{\varepsilon }\circ \varphi _{t}^{\varepsilon },\varphi _{0}^{\varepsilon }=\operatorname {id} ,\varphi _{1}^{\varepsilon }=\varphi _{1}} , giving the following two Eqns:
Equating the above two equations gives the perturbation of the vector field in terms of the Lie bracket adjustment.
The Lie bracket gives the first order variation of the vector field with respect to first order variation of the flow.
The Euler–Lagrange equation can be used to calculate geodesic flows through the group which form the basis for the metric. The action integral for the Lagrangian of the kinetic energy for Hamilton's principle becomes
The action integral in terms of the vector field corresponds to integrating the kinetic energy
The shortest paths geodesic connections in the orbit are defined via Hamilton's Principle of least action requires first order variations of the solutions in the orbits of Computational Anatomy which are based on computing critical points on the metric length or energy of the path.
The original derivation of the Euler equation [ 7 ] associated to the geodesic flow of diffeomorphisms exploits the fact that A v ∈ V ∗ {\displaystyle Av\in V^{*}} is a distribution, or generalized function. Taking the first order variation of the action integral using the adjoint operator for the Lie bracket ( adjoint-Lie-bracket ) gives, for all smooth w ∈ V {\displaystyle w\in V} ,
Using the bracket a d v : w ∈ V ↦ V {\displaystyle ad_{v}:w\in V\mapsto V} and a d v ∗ : V ∗ → V ∗ {\displaystyle ad_{v}^{*}:V^{*}\rightarrow V^{*}} gives
meaning for all smooth w ∈ V , {\displaystyle w\in V,}
Equation ( Euler-general ) is the Euler-equation when diffeomorphic shape momentum is a generalized function. [ 8 ] This equation has been called EPDiff, Euler–Poincare equation for diffeomorphisms and has been studied in the context of fluid mechanics for incompressible fluids with L 2 {\displaystyle L^{2}} metric. [ 9 ] [ 10 ]
In the random orbit model of Computational anatomy , the entire flow is reduced to the initial condition, which forms the coordinates encoding the diffeomorphism, as well as providing the means of positioning information in the orbit. This was first termed a geodesic positioning system in Miller, Trouve, and Younes. [ 4 ] From the initial condition v 0 {\displaystyle v_{0}} , geodesic positioning with respect to the Riemannian metric of Computational anatomy solves for the flow of the Euler–Lagrange equation. Solving the geodesic from the initial condition v 0 {\displaystyle v_{0}} is termed the Riemannian-exponential, a mapping Exp id ( ⋅ ) : V → Diff V {\displaystyle \operatorname {Exp} _{\operatorname {id} }(\cdot ):V\to \operatorname {Diff} _{V}} at identity to the group.
The Riemannian exponential satisfies Exp id ( v 0 ) = φ 1 {\displaystyle \operatorname {Exp} _{\operatorname {id} }(v_{0})=\varphi _{1}} for initial condition φ ˙ 0 = v 0 {\displaystyle {\dot {\varphi }}_{0}=v_{0}} , vector field dynamics φ ˙ t = v t ∘ φ t , t ∈ [ 0 , 1 ] {\displaystyle {\dot {\varphi }}_{t}=v_{t}\circ \varphi _{t},t\in [0,1]} ,
It is extended to the entire group, φ = Exp φ ( v 0 ∘ φ ) ≐ Exp id ( v 0 ) ∘ φ {\displaystyle \varphi =\operatorname {Exp} _{\varphi }(v_{0}\circ \varphi )\doteq \operatorname {Exp} _{\operatorname {id} }(v_{0})\circ \varphi } .
Matching information across coordinate systems is central to computational anatomy . Adding a matching term E : φ ∈ Diff V → R + {\displaystyle E:\varphi \in \operatorname {Diff} _{V}\rightarrow R^{+}} to the action integral of Equation ( Hamilton's action integral )
which represents the target endpoint
The endpoint term adds a boundary condition for the Euler–Lagrange equation ( EL-General )
which gives the Euler equation with boundary term. Taking the variation gives
Proof: [ 11 ] The Proof via variation calculus uses the perturbations from above and classic calculus of variation arguments.
The earliest large deformation diffeomorphic metric mapping ( LDDMM ) algorithms solved matching problems associated to images and registered landmarks, which are elements of vector spaces. The image matching geodesic equation satisfies the classical dynamical equation with endpoint condition. The necessary conditions for the geodesic for image matching take the form of the classic Equation ( EL-Classic ) of Euler–Lagrange with boundary condition:
The registered landmark matching problem satisfies the dynamical equation for generalized functions with endpoint condition:
Proof: [ 11 ]
The variation ∂ ∂ φ E ( φ ) {\displaystyle {\frac {\partial }{\partial \varphi }}E(\varphi )} requires the variation of the inverse φ − 1 {\displaystyle \varphi ^{-1}} , which generalizes the matrix perturbation of the inverse via ( φ + ε δ φ ∘ φ ) ∘ ( φ − 1 + ε δ φ − 1 ∘ φ − 1 ) = id + o ( ε ) {\displaystyle (\varphi +\varepsilon \delta \varphi \circ \varphi )\circ (\varphi ^{-1}+\varepsilon \delta \varphi ^{-1}\circ \varphi ^{-1})=\operatorname {id} +o(\varepsilon )} , giving δ φ − 1 ∘ φ − 1 = − ( D φ 1 − 1 ) δ φ {\displaystyle \delta \varphi ^{-1}\circ \varphi ^{-1}=-(D\varphi _{1}^{-1})\delta \varphi } . | https://en.wikipedia.org/wiki/Riemannian_metric_and_Lie_bracket_in_computational_anatomy |
In mathematics, the term Riemann–Hilbert correspondence refers to the correspondence between regular singular flat connections on algebraic vector bundles and representations of the fundamental group, and more generally to one of several generalizations of this.
The original setting appearing in Hilbert's twenty-first problem was for the Riemann sphere, where it was about the existence of systems of linear regular differential equations with prescribed monodromy representations.
First the Riemann sphere may be replaced by an arbitrary Riemann surface and then, in higher dimensions, Riemann surfaces are replaced by complex manifolds of dimension > 1.
There is a correspondence between certain systems of partial differential equations (linear and having very special properties for their solutions) and possible monodromies of their solutions.
Such a result was proved for algebraic connections with regular singularities by Pierre Deligne (1970, generalizing existing work in the case of Riemann surfaces) and more generally for regular holonomic D-modules by Masaki Kashiwara (1980, 1984) and Zoghman Mebkhout (1980, 1984) independently.
In the setting of nonabelian Hodge theory , the Riemann-Hilbert correspondence provides a complex analytic isomorphism between two of the three natural algebraic structures on the moduli spaces, and so is naturally viewed as a nonabelian analogue of the comparison isomorphism between De Rham cohomology and singular/Betti cohomology.
Suppose that X is a smooth complex algebraic variety.
Riemann–Hilbert correspondence (for regular singular connections):
there is a functor Sol called the local solutions functor, that is an equivalence from the category of flat connections on algebraic vector bundles on X with regular singularities to the category of local systems of finite-dimensional complex vector spaces on X . For X connected, the category of local systems is also equivalent to the category of complex representations of the fundamental group of X .
Thus such connections give a purely algebraic way to access the finite dimensional representations of the topological fundamental group.
The condition of regular singularities means that locally constant sections of the bundle (with respect to the flat connection) have moderate growth at points of Y − X , where Y is an algebraic compactification of X . In particular, when X is compact, the condition of regular singularities is vacuous.
More generally there is the
Riemann–Hilbert correspondence (for regular holonomic D-modules): there is a functor DR called the de Rham functor, that is an equivalence from the category of holonomic D-modules on X with regular singularities to the category of perverse sheaves on X .
By considering the irreducible elements of each category, this gives a 1:1 correspondence between isomorphism classes of
and
A D-module is something like a system of differential equations on X , and a local system on a subvariety is something like a description of possible monodromies, so this correspondence can be thought of as describing certain systems of differential equations in terms of the monodromies of their solutions.
When X has dimension one (a complex algebraic curve), there is a more general Riemann–Hilbert correspondence for algebraic connections with no regularity assumption (or for holonomic D-modules with no regularity assumption), described in Malgrange (1991), the Riemann–Hilbert–Birkhoff correspondence .
An example where the theorem applies is the differential equation
on the punctured affine line A 1 − {0} (that is, on the nonzero complex numbers C − {0}). Here a is a fixed complex number. This equation has regular singularities at 0 and ∞ in the projective line P 1 . The local solutions of the equation are of the form cz a for constants c . If a is not an integer, then the function z a cannot be made well-defined on all of C − {0}. That means that the equation has nontrivial monodromy. Explicitly, the monodromy of this equation is the 1-dimensional representation of the fundamental group π 1 ( A 1 − {0}) = Z in which the generator (a loop around the origin) acts by multiplication by e 2 π ia .
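This monodromy can be checked numerically; the sketch below (an illustrative addition) continues a solution of the equation around the unit circle by integrating d(log M) = a dz/z along the loop, recovering the multiplier e^{2πia}:

```python
import cmath

def monodromy_of_power(a, n=10000):
    # Continue M(z) = z^a around the unit circle by integrating
    # d(log M) = a dz/z along the loop z = e^{i theta}, theta: 0 -> 2 pi.
    log_m = 0j
    for k in range(n):
        z0 = cmath.exp(2j * cmath.pi * k / n)
        z1 = cmath.exp(2j * cmath.pi * (k + 1) / n)
        log_m += a * (z1 - z0) / z0
    return cmath.exp(log_m)  # ratio M(end) / M(start)

a = 0.3  # non-integer, so the monodromy is nontrivial
mult = monodromy_of_power(a)
expected = cmath.exp(2j * cmath.pi * a)  # the monodromy multiplier
```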
To see the need for the hypothesis of regular singularities, consider the differential equation
on the affine line A 1 (that is, on the complex numbers C ). This equation corresponds to a flat connection on the trivial algebraic line bundle over A 1 . The solutions of the equation are of the form ce z for constants c . Since these solutions do not have polynomial growth on some sectors around the point ∞ in the projective line P 1 , the equation does not have regular singularities at ∞. (This can also be seen by rewriting the equation in terms of the variable w := 1/ z , where it becomes
The pole of order 2 in the coefficients means that the equation does not have regular singularities at w = 0, according to Fuchs's theorem .)
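A quick numerical check (an illustrative addition): in the variable w the solutions M = c e^{1/w} satisfy dM/dw = −(1/w²)M, whose coefficient exhibits the order-2 pole:

```python
import math

def M(wv):
    # The solution c e^z with c = 1, written in the variable w = 1/z.
    return math.exp(1.0 / wv)

# dM/dw should equal -(1/w^2) M: the coefficient -1/w^2 has a pole of
# order 2 at w = 0, an irregular singularity by Fuchs's theorem.
for wv in (0.5, 1.0, 2.0):
    h = 1e-6
    numeric = (M(wv + h) - M(wv - h)) / (2 * h)  # central difference
    assert abs(numeric + M(wv) / wv ** 2) < 1e-3
```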
Since the functions ce z are defined on the whole affine line A 1 , the monodromy of this flat connection is trivial. But this flat connection is not isomorphic to the obvious flat connection on the trivial line bundle over A 1 (as an algebraic vector bundle with flat connection), because its solutions do not have moderate growth at ∞. This shows the need to restrict to flat connections with regular singularities in the Riemann–Hilbert correspondence. On the other hand, if we work with holomorphic (rather than algebraic) vector bundles with flat connection on a noncompact complex manifold such as A 1 = C , then the notion of regular singularities is not defined. A much more elementary theorem than the Riemann–Hilbert correspondence states that flat connections on holomorphic vector bundles are determined up to isomorphism by their monodromy.
For schemes in characteristic p >0, Emerton & Kisin (2004) (later developed further under less restrictive assumptions in Bhatt & Lurie (2019) ) establish a Riemann-Hilbert correspondence that asserts in particular that étale cohomology of étale sheaves with Z / p -coefficients can be computed in terms of the action of the Frobenius endomorphism on coherent cohomology .
More generally, there are equivalences of categories between constructible (resp. perverse) étale Z / p -sheaves and left (resp. right) modules with a Frobenius (resp. Cartier) action. This can be regarded as the positive characteristic analogue of the classical theory, where one can find a similar interplay of constructible vs. perverse t-structures. | https://en.wikipedia.org/wiki/Riemann–Hilbert_correspondence |
In mathematics , Riemann–Hilbert problems , named after Bernhard Riemann and David Hilbert , are a class of problems that arise in the study of differential equations in the complex plane . Several existence theorems for Riemann–Hilbert problems have been produced by Mark Krein , Israel Gohberg and others. [ 1 ]
Suppose that Σ {\displaystyle \Sigma } is a smooth , simple, closed contour in the complex plane . [ 2 ] Divide the plane into two parts denoted by Σ + {\displaystyle \Sigma _{+}} (the inside) and Σ − {\displaystyle \Sigma _{-}} (the outside), determined by the index of the contour with respect to a point. The classical problem, considered in Riemann's PhD dissertation, was that of finding a function
analytic inside Σ + {\displaystyle \Sigma _{+}} , such that the boundary values of M + {\displaystyle M_{+}} along Σ {\displaystyle \Sigma } satisfy the equation
for t ∈ Σ {\displaystyle t\in \Sigma } , where a ( t ) {\displaystyle a(t)} , b ( t ) {\displaystyle b(t)} and c ( t ) {\displaystyle c(t)} are given real-valued functions. [ 3 ] [ 4 ] For example, in the special case where a = 1 , b = 0 {\displaystyle a=1,b=0} and Σ {\displaystyle \Sigma } is a circle, the problem reduces to deriving the Poisson formula . [ 5 ]
By the Riemann mapping theorem , it suffices to consider the case when Σ {\displaystyle \Sigma } is the circle group T = { z ∈ C : | z | = 1 } {\textstyle \mathbb {T} =\{z\in \mathbb {C} :|z|=1\}} . [ 6 ] In this case, one may seek M + ( z ) {\displaystyle M_{+}(z)} along with its Schwarz reflection
For z ∈ T {\displaystyle z\in \mathbb {T} } , one has z = 1 / z ¯ {\displaystyle z=1/{\bar {z}}} and so
Hence the problem reduces to finding a pair of analytic functions M + ( z ) {\displaystyle M_{+}(z)} and M − ( z ) {\displaystyle M_{-}(z)} on the inside and outside, respectively, of the unit disk , so that on the unit circle
and, moreover, so that the condition at infinity holds:
Hilbert's generalization of the problem attempted to find a pair of analytic functions M + ( t ) {\displaystyle M_{+}(t)} and M − ( t ) {\displaystyle M_{-}(t)} on the inside and outside, respectively, of the curve Σ {\displaystyle \Sigma } , such that for t ∈ Σ {\displaystyle t\in \Sigma } one has
where α ( t ) {\displaystyle \alpha (t)} , β ( t ) {\displaystyle \beta (t)} and γ ( t ) {\displaystyle \gamma (t)} are given complex-valued functions (no longer just complex conjugates). [ 7 ]
In the Riemann problem as well as Hilbert's generalization, the contour Σ {\displaystyle \Sigma } was simple. A full Riemann–Hilbert problem allows that the contour may be composed of a union of several oriented smooth curves, with no intersections. The "+" and "−" sides of the "contour" may then be determined according to the index of a point with respect to Σ {\displaystyle \Sigma } . The Riemann–Hilbert problem is to find a pair of analytic functions M + ( t ) {\displaystyle M_{+}(t)} and M − ( t ) {\displaystyle M_{-}(t)} on the "+" and "−" side of Σ {\displaystyle \Sigma } , respectively, such that for t ∈ Σ {\displaystyle t\in \Sigma } one has
where α ( t ) {\displaystyle \alpha (t)} , β ( t ) {\displaystyle \beta (t)} and γ ( t ) {\displaystyle \gamma (t)} are given complex-valued functions.
Given an oriented contour Σ {\displaystyle \Sigma } (technically: an oriented union of smooth curves without points of infinite self-intersection in the complex plane), a Riemann–Hilbert factorization problem is the following.
Given a matrix function G ( t ) {\displaystyle G(t)} defined on the contour Σ {\displaystyle \Sigma } , find a holomorphic matrix function M ( z ) {\displaystyle M(z)} defined on the complement of Σ {\displaystyle \Sigma } , such that the following two conditions are satisfied [ 8 ]
In the simplest case G ( t ) {\displaystyle G(t)} is smooth and integrable. In more complicated cases it could have singularities. The limits M + {\displaystyle M_{+}} and M − {\displaystyle M_{-}} could be classical and continuous or they could be taken in the L 2 {\displaystyle L^{2}} -sense .
At end-points or intersection points of the contour Σ {\displaystyle \Sigma } , the jump condition is not defined; constraints on the growth of M {\displaystyle M} near those points have to be posed to ensure uniqueness (see the scalar problem below).
Suppose G = 2 {\displaystyle G=2} and Σ = [ − 1 , 1 ] {\displaystyle \Sigma =[-1,1]} . Assuming M {\displaystyle M} is bounded, what is the solution M {\displaystyle M} ?
To solve this, take the logarithm of the equation M + = G M − {\displaystyle M_{+}=GM_{-}} .
Since M ( z ) {\displaystyle M(z)} tends to 1 {\displaystyle 1} , log M → 0 {\displaystyle \log M\to 0} as z → ∞ {\displaystyle z\to \infty } .
A standard fact about the Cauchy transform is that C + − C − = I {\displaystyle C_{+}-C_{-}=I} where C + {\displaystyle C_{+}} and C − {\displaystyle C_{-}} are the limits of the Cauchy transform from above and below Σ {\displaystyle \Sigma } ; therefore, we get
when z ∈ Σ {\displaystyle z\in \Sigma } . Because the solution of a Riemann–Hilbert factorization problem is unique (an easy application of Liouville's theorem ), the Sokhotski–Plemelj theorem gives the solution. We get
and therefore
which has a branch cut at contour Σ {\displaystyle \Sigma } .
Check:
therefore,
CAVEAT 1: If the problem is not scalar, one cannot easily take logarithms. In general, explicit solutions are very rare.
CAVEAT 2: The boundedness (or at least a constraint on the blow-up) of M {\displaystyle M} near the special points 1 {\displaystyle 1} and − 1 {\displaystyle -1} is crucial. Otherwise any function of the form
is also a solution. In general, conditions on growth are necessary at special points (the end-points of the jump contour or intersection point) to ensure that the problem is well-posed.
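The scalar solution above can be verified numerically. A minimal sketch in Python (assuming the principal branch of the complex logarithm; `eps` sets the distance from which the boundary values M ± are approximated):

```python
import cmath

# Candidate solution from the text: M(z) = ((z-1)/(z+1))^(log 2 / (2*pi*i)),
# using the principal branch of the logarithm.
def M(z):
    c = cmath.log(2) / (2j * cmath.pi)
    return cmath.exp(c * cmath.log((z - 1) / (z + 1)))

eps = 1e-9
for t in (-0.5, 0.0, 0.7):           # points on the cut (-1, 1)
    jump = M(t + 1j * eps) / M(t - 1j * eps)
    assert abs(jump - 2) < 1e-6      # jump condition M_+ = G M_- with G = 2

assert abs(M(1e9) - 1) < 1e-6        # normalization M -> 1 at infinity
```

Since the exponent log 2 / (2πi) is purely imaginary, |M| stays bounded near the endpoints ±1, which is the constraint that singles out this solution.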
Suppose D {\displaystyle D} is some simply connected domain of the complex z {\displaystyle z} plane . Then the scalar equation
is a generalization of a Riemann-Hilbert problem, called the DBAR problem (or ∂ ¯ {\displaystyle {\overline {\partial }}} problem ). It is the complex form of the nonhomogeneous Cauchy-Riemann equations . To show this, let
with u ( x , y ) {\displaystyle u(x,y)} , v ( x , y ) {\displaystyle v(x,y)} , g ( x , y ) {\displaystyle g(x,y)} and h ( x , y ) {\displaystyle h(x,y)} all real-valued functions of real variables x {\displaystyle x} and y {\displaystyle y} . Then, using
the DBAR problem yields
As such, if M {\displaystyle M} is holomorphic for z ∈ D {\displaystyle z\in D} , then the Cauchy-Riemann equations must be satisfied. [ 9 ]
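Explicitly, with the convention ∂̄ = ½(∂/∂x + i ∂/∂y) and writing the right-hand side of the DBAR equation as g + ih:

```latex
{\bar {\partial }}M = \tfrac{1}{2}\left(\partial _{x}+i\,\partial _{y}\right)(u+iv)
= \tfrac{1}{2}{\bigl [}(u_{x}-v_{y})+i\,(u_{y}+v_{x}){\bigr ]} = g+ih,
```

so u x − v y = 2 g and u y + v x = 2 h , which reduce to the Cauchy-Riemann equations when g = h = 0.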
In case M → 1 {\displaystyle M\to 1} as z → ∞ {\displaystyle z\to \infty } and D := C {\displaystyle D:=\mathbb {C} } , the solution of the DBAR problem is [ 10 ]
integrated over the entire complex plane, denoted by R 2 {\displaystyle \mathbb {R} ^{2}} , where the wedge product is defined as
If a function M ( z ) {\displaystyle M(z)} is holomorphic in some complex region R {\displaystyle R} , then
in R {\displaystyle R} . For generalized analytic functions, this equation is replaced by
in a region R {\displaystyle R} , where M ¯ {\displaystyle {\overline {M}}} is the complex conjugate of M {\displaystyle M} and A ( z , z ¯ ) {\displaystyle A(z,{\bar {z}})} and B ( z , z ¯ ) {\displaystyle B(z,{\bar {z}})} are functions of z {\displaystyle z} and z ¯ {\displaystyle {\bar {z}}} . [ 11 ]
Generalized analytic functions have applications in differential geometry , in solving certain type of multidimensional nonlinear partial differential equations and multidimensional inverse scattering . [ 12 ]
Riemann–Hilbert problems have applications to several related classes of problems.
The numerical analysis of Riemann–Hilbert problems can provide an effective way for numerically solving integrable PDEs (see e.g. Trogdon & Olver (2016) ).
In particular, Riemann–Hilbert factorization problems are used to extract asymptotic values for the three problems above (say, as time goes to infinity, or as the dispersion coefficient goes to zero, or as the polynomial degree goes to infinity, or as the size of the permutation goes to infinity). There exists a method for extracting the asymptotic behavior of solutions of Riemann–Hilbert problems, analogous to the method of stationary phase and the method of steepest descent applicable to exponential integrals.
By analogy with the classical asymptotic methods, one "deforms" Riemann–Hilbert problems which are not explicitly solvable to problems that are. The so-called "nonlinear" method of stationary phase is due to Deift & Zhou (1993) , expanding on a previous idea by Its (1982) and Manakov (1974) and using technical background results from Beals & Coifman (1984) and Zhou (1989) . A crucial ingredient of the Deift–Zhou analysis is the asymptotic analysis of singular integrals on contours. The relevant kernel is the standard Cauchy kernel (see Gakhov (2001) ; also cf. the scalar example below).
An essential extension of the nonlinear method of stationary phase has been the introduction of the so-called finite gap g-function transformation by Deift, Venakides & Zhou (1997) , which has been crucial in most applications. This was inspired by work of Lax, Levermore and Venakides, who reduced the analysis of the small dispersion limit of the KdV equation to the analysis of a maximization problem for a logarithmic potential under some external field: a variational problem of "electrostatic" type (see Lax & Levermore (1983) ). The g-function is the logarithmic transform of the maximizing "equilibrium" measure. The analysis of the small dispersion limit of KdV equation has in fact provided the basis for the analysis of most of the work concerning "real" orthogonal polynomials (i.e. with the orthogonality condition defined on the real line) and Hermitian random matrices.
Perhaps the most sophisticated extension of the theory so far is the one applied to the "non self-adjoint" case, i.e. when the underlying Lax operator (the first component of the Lax pair ) is not self-adjoint , by Kamvissis, McLaughlin & Miller (2003) . In that case, actual "steepest descent contours" are defined and computed. The corresponding variational problem is a max-min problem: one looks for a contour that minimizes the "equilibrium" measure. The study of the variational problem and the proof of existence of a regular solution, under some conditions on the external field, was done in Kamvissis & Rakhmanov (2005) ; the contour arising is an "S-curve", as defined and studied in the 1980s by Herbert R. Stahl, Andrei A. Gonchar and Evguenii A Rakhmanov.
An alternative asymptotic analysis of Riemann–Hilbert factorization problems is provided in McLaughlin & Miller (2006) , especially convenient when jump matrices do not have analytic extensions. Their method is based on the analysis of d-bar problems, rather than the asymptotic analysis of singular integrals on contours. An alternative way of dealing with jump matrices with no analytic extensions was introduced in Varzugin (1996) .
Another extension of the theory appears in Kamvissis & Teschl (2012) where the underlying space of the Riemann–Hilbert problem is a compact hyperelliptic Riemann surface . The correct factorization problem is no more holomorphic, but rather meromorphic , by reason of the Riemann–Roch theorem . The related singular kernel is not the usual Cauchy kernel, but rather a more general kernel involving meromorphic differentials defined naturally on the surface (see e.g. the appendix in Kamvissis & Teschl (2012) ). The Riemann–Hilbert problem deformation theory is applied to the problem of stability of the infinite periodic Toda lattice under a "short range" perturbation (for example a perturbation of a finite number of particles).
Most Riemann–Hilbert factorization problems studied in the literature are 2-dimensional, i.e., the unknown matrices are of dimension 2. Higher-dimensional problems have been studied by Arno Kuijlaars and collaborators, see e.g. Kuijlaars & López (2015) . | https://en.wikipedia.org/wiki/Riemann–Hilbert_problem |
In mathematics , the Riemann–Lebesgue lemma , named after Bernhard Riemann and Henri Lebesgue , states that the Fourier transform or Laplace transform of an L 1 function vanishes at infinity . It is of importance in harmonic analysis and asymptotic analysis .
Let f ∈ L 1 ( R n ) {\displaystyle f\in L^{1}(\mathbb {R} ^{n})} be an integrable function, i.e. f : R n → C {\displaystyle f\colon \mathbb {R} ^{n}\rightarrow \mathbb {C} } is a measurable function such that
and let f ^ {\displaystyle {\hat {f}}} be the Fourier transform of f {\displaystyle f} , i.e.
Then f ^ {\displaystyle {\hat {f}}} vanishes at infinity: | f ^ ( ξ ) | → 0 {\displaystyle |{\hat {f}}(\xi )|\to 0} as | ξ | → ∞ {\displaystyle |\xi |\to \infty } .
Because the Fourier transform of an integrable function is continuous, the Fourier transform f ^ {\displaystyle {\hat {f}}} is a continuous function vanishing at infinity. If C 0 ( R n ) {\displaystyle C_{0}(\mathbb {R} ^{n})} denotes the vector space of continuous functions vanishing at infinity, the Riemann–Lebesgue lemma may be formulated as follows: The Fourier transformation maps L 1 ( R n ) {\displaystyle L^{1}(\mathbb {R} ^{n})} to C 0 ( R n ) {\displaystyle C_{0}(\mathbb {R} ^{n})} .
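A concrete illustration: for f the indicator function of [0, 1], the transform can be computed in closed form and decays like 1/|ξ| (using the convention f̂(ξ) = ∫ f(x) e^{−iξx} dx; other normalizations change only constants):

```python
import cmath

# Fourier transform of the indicator function of [0, 1]:
# fhat(xi) = \int_0^1 e^{-i*xi*x} dx = (1 - e^{-i*xi}) / (i*xi)
def fhat(xi):
    return (1 - cmath.exp(-1j * xi)) / (1j * xi)

assert abs(fhat(1.0)) > 0.5                   # no decay yet at small xi
for xi in (1e2, 1e4, 1e6):
    assert abs(fhat(xi)) <= 2 / xi + 1e-12    # decays like O(1/|xi|)
```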
We will focus on the one-dimensional case n = 1 {\displaystyle n=1} ; the proof in higher dimensions is similar. First, suppose that f {\displaystyle f} is continuous and compactly supported . For ξ ≠ 0 {\displaystyle \xi \neq 0} , the substitution x → x + π ξ {\displaystyle \textstyle x\to x+{\frac {\pi }{\xi }}} leads to
This gives a second formula for f ^ ( ξ ) {\displaystyle {\hat {f}}(\xi )} . Taking the mean of both formulas, we arrive at the following estimate:
Because f {\displaystyle f} is continuous, | f ( x ) − f ( x + π ξ ) | {\displaystyle \left|f(x)-f\left(x+{\tfrac {\pi }{\xi }}\right)\right|} converges to 0 {\displaystyle 0} as | ξ | → ∞ {\displaystyle |\xi |\to \infty } for all x ∈ R {\displaystyle x\in \mathbb {R} } . Thus, | f ^ ( ξ ) | {\displaystyle |{\hat {f}}(\xi )|} converges to 0 as | ξ | → ∞ {\displaystyle |\xi |\to \infty } due to the dominated convergence theorem .
If f {\displaystyle f} is an arbitrary integrable function, it may be approximated in the L 1 {\displaystyle L^{1}} norm by a compactly supported continuous function. For ε > 0 {\displaystyle \varepsilon >0} , pick a compactly supported continuous function g {\displaystyle g} such that ‖ f − g ‖ L 1 ≤ ε {\displaystyle \|f-g\|_{L^{1}}\leq \varepsilon } . Then
Because this holds for any ε > 0 {\displaystyle \varepsilon >0} , it follows that | f ^ ( ξ ) | → 0 {\displaystyle |{\hat {f}}(\xi )|\to 0} as | ξ | → ∞ {\displaystyle |\xi |\to \infty } .
The Riemann–Lebesgue lemma holds in a variety of other situations.
The Riemann–Lebesgue lemma can be used to prove the validity of asymptotic approximations for integrals. Rigorous treatments of the method of steepest descent and the method of stationary phase , amongst others, are based on the Riemann–Lebesgue lemma. | https://en.wikipedia.org/wiki/Riemann–Lebesgue_lemma |
In mathematics , the Riemann–Liouville integral associates with a real function f : R → R {\displaystyle f:\mathbb {R} \rightarrow \mathbb {R} } another function I α f of the same kind for each value of the parameter α > 0 . The integral generalizes the repeated antiderivative of f in the sense that for positive integer values of α , I α f is an iterated antiderivative of f of order α . The Riemann–Liouville integral is named for Bernhard Riemann and Joseph Liouville , the latter of whom was the first to consider the possibility of fractional calculus in 1832. [ 1 ] [ 2 ] [ 3 ] [ 4 ] The operator agrees with the Euler transform , after Leonhard Euler , when applied to analytic functions . [ 5 ] It was generalized to arbitrary dimensions by Marcel Riesz , who introduced the Riesz potential .
The Riemann–Liouville integral is motivated by the Cauchy formula for repeated integration. For a function f continuous on the interval [ a , x ], the Cauchy formula for n -fold repeated integration states that
I n f ( x ) = f ( − n ) ( x ) = 1 ( n − 1 ) ! ∫ a x ( x − t ) n − 1 f ( t ) d t . {\displaystyle I^{n}f(x)=f^{(-n)}(x)={\frac {1}{(n-1)!}}\int _{a}^{x}\left(x-t\right)^{n-1}f(t)\,\mathrm {d} t.}
Now, this formula can be generalized to any positive real number by replacing the positive integer n with α . Therefore we obtain the definition of the Riemann–Liouville fractional integral:
The Riemann–Liouville integral is defined by
where Γ is the gamma function and a is an arbitrary but fixed base point. The integral is well-defined provided f is a locally integrable function , and α is a complex number in the half-plane Re( α ) > 0 . The dependence on the base-point a is often suppressed, and represents a freedom in constant of integration . Clearly I 1 f is an antiderivative of f (of first order), and for positive integer values of α , I α f is an antiderivative of order α by Cauchy formula for repeated integration . Another notation, which emphasizes the base point, is [ 6 ]
This also makes sense if a = −∞ , with suitable restrictions on f .
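As a numerical sanity check of the definition, the following sketch approximates I α f by crude midpoint quadrature (the helper `riemann_liouville` is illustrative, not a library function) and compares it with the closed form for f ( t ) = t that follows from the monomial formula given later in the article:

```python
import math

def riemann_liouville(f, alpha, x, a=0.0, n=200_000):
    """Midpoint-rule approximation of (I^alpha f)(x) with base point a.

    The integrand's (x-t)^(alpha-1) singularity at t = x is integrable,
    and the midpoint rule avoids evaluating it at the endpoint.
    """
    h = (x - a) / n
    s = sum(f(a + (k + 0.5) * h) * (x - (a + (k + 0.5) * h)) ** (alpha - 1)
            for k in range(n))
    return s * h / math.gamma(alpha)

# Closed form for f(t) = t with a = 0:  (I^alpha f)(x) = x^(1+alpha) / Gamma(2+alpha)
x, alpha = 1.0, 0.5
exact = x ** (1 + alpha) / math.gamma(2 + alpha)
approx = riemann_liouville(lambda t: t, alpha, x)
assert abs(approx - exact) < 1e-2
```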
The fundamental relations hold
the latter of which is a semigroup property. [ 1 ] These properties make possible not only the definition of fractional integration, but also of fractional differentiation, by taking enough derivatives of I α f .
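The semigroup property can be verified directly on monomials with base point a = 0 :

```latex
I^{\beta }x^{k} = \frac{\Gamma (k+1)}{\Gamma (k+1+\beta )}\,x^{k+\beta },
\qquad
I^{\alpha }I^{\beta }x^{k}
= \frac{\Gamma (k+1)}{\Gamma (k+1+\beta )}\cdot \frac{\Gamma (k+\beta +1)}{\Gamma (k+\beta +1+\alpha )}\,x^{k+\alpha +\beta }
= \frac{\Gamma (k+1)}{\Gamma (k+1+\alpha +\beta )}\,x^{k+\alpha +\beta }
= I^{\alpha +\beta }x^{k}.
```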
Fix a bounded interval ( a , b ) . The operator I α associates to each integrable function f on ( a , b ) the function I α f on ( a , b ) which is also integrable by Fubini's theorem . Thus I α defines a linear operator on L 1 ( a , b ) :
Fubini's theorem also shows that this operator is continuous with respect to the Banach space structure on L 1 , and that the following inequality holds:
Here ‖ · ‖ 1 denotes the norm on L 1 ( a , b ) .
More generally, by Hölder's inequality , it follows that if f ∈ L p ( a , b ) , then I α f ∈ L p ( a , b ) as well, and the analogous inequality holds
where ‖ · ‖ p is the L p norm on the interval ( a , b ) . Thus we have a bounded linear operator I α : L p ( a , b ) → L p ( a , b ) . Furthermore, I α f → f in the L p sense as α → 0 along the real axis. That is
for all p ≥ 1 . Moreover, by estimating the maximal function of I , one can show that the limit I α f → f holds pointwise almost everywhere .
The operator I α is well-defined on the set of locally integrable functions on the whole real line R {\displaystyle \mathbb {R} } . It defines a bounded transformation on any of the Banach spaces of functions of exponential type X σ = L 1 ( e − σ | t | d t ) , {\displaystyle X_{\sigma }=L^{1}(e^{-\sigma |t|}dt),} consisting of locally integrable functions for which the norm
is finite. For f ∈ X σ , the Laplace transform of I α f takes the particularly simple form
for Re( s ) > σ . Here F ( s ) denotes the Laplace transform of f , and this property expresses that I α is a Fourier multiplier .
One can define fractional-order derivatives of f as well by
where ⌈ · ⌉ denotes the ceiling function . One also obtains a differintegral interpolating between differentiation and integration by defining
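In the standard convention these definitions read as follows (with n = ⌈ α ⌉; stated here as the form usually found in references):

```latex
D^{\alpha }f(x) = \frac{\mathrm {d} ^{n}}{\mathrm {d} x^{n}}\,I^{\,n-\alpha }f(x),
\qquad n = \lceil \alpha \rceil ,
\qquad
\mathbb {D} ^{\alpha }f = {\begin{cases} D^{\alpha }f, & \alpha >0,\\ f, & \alpha =0,\\ I^{-\alpha }f, & \alpha <0. \end{cases}}
```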
An alternative fractional derivative was introduced by Caputo in 1967, [ 7 ] and produces a derivative that has different properties: it produces zero from constant functions and, more importantly, the initial value terms of the Laplace transform are expressed by means of the values of that function and of its derivative of integer order rather than the derivatives of fractional order as in the Riemann–Liouville derivative. [ 8 ] The Caputo fractional derivative with base point x is then:
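In the usual convention (with n = ⌈ α ⌉; a standard form, as notations for the base point vary between references):

```latex
{}^{C}\!D^{\alpha }f(x) = \frac{1}{\Gamma (n-\alpha )}\int _{a}^{x}(x-t)^{\,n-\alpha -1}f^{(n)}(t)\,\mathrm {d} t,
\qquad n = \lceil \alpha \rceil .
```

In particular, for a constant c , 0 < α < 1 and base point 0, the Caputo derivative vanishes, while the Riemann–Liouville derivative gives D α c = c x −α / Γ(1 − α ) ≠ 0.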
Another representation is:
Let us assume that f ( x ) is a monomial of the form
The first derivative is as usual
Repeating this gives the more general result that
which, after replacing the factorials with the gamma function , leads to
For k = 1 and a = 1 / 2 , we obtain the half-derivative of the function x ↦ x {\displaystyle x\mapsto x} as
To demonstrate that this is, in fact, the "half derivative" (where H 2 f ( x ) = Df ( x ) ), we repeat the process to get:
(because Γ ( 3 2 ) = π 2 {\textstyle \Gamma \!\left({\frac {3}{2}}\right)={\frac {\sqrt {\pi }}{2}}} and Γ(1) = 1 ) which is indeed the expected result of
For negative integer power k , 1/ Γ {\textstyle \Gamma } is 0, so it is convenient to use the following relation: [ 9 ]
This extension of the above differential operator need not be constrained only to real powers; it also applies for complex powers. For example, the (1 + i ) -th derivative of the (1 − i ) -th derivative yields the second derivative. Also setting negative values for a yields integrals.
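The monomial rule above is easy to check in code. A sketch (the helper `frac_deriv_monomial` is hypothetical) confirming that two successive half-derivatives of x give the first derivative:

```python
import math

# Monomial rule: D^a x^k = Gamma(k+1)/Gamma(k+1-a) * x^(k-a)
def frac_deriv_monomial(k, a):
    """Return (coefficient, exponent) of D^a applied to x^k."""
    return math.gamma(k + 1) / math.gamma(k + 1 - a), k - a

# Half-derivative of x: coefficient 1/Gamma(3/2) = 2/sqrt(pi), exponent 1/2
c1, e1 = frac_deriv_monomial(1, 0.5)
assert abs(c1 - 2 / math.sqrt(math.pi)) < 1e-12 and e1 == 0.5

# Applying the half-derivative again must give D x = 1:
c2, e2 = frac_deriv_monomial(e1, 0.5)
assert abs(c1 * c2 - 1.0) < 1e-12 and e2 == 0.0
```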
For a general function f ( x ) and 0 < α < 1 , the complete fractional derivative is
For arbitrary α , since the gamma function is infinite for negative (real) integers, it is necessary to apply the fractional derivative after the integer derivative has been performed. For example,
We can also come at the question via the Laplace transform . Knowing that
and
and so on, we assert
For example,
as expected. Indeed, given the convolution rule
and shorthanding p ( x ) = x α − 1 for clarity, we find that
which is what Cauchy gave us above.
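Written out, the convolution computation is:

```latex
I^{\alpha }f = \frac{1}{\Gamma (\alpha )}\,(p*f),
\qquad
\mathcal {L}\{p\}(s) = \mathcal {L}\{x^{\alpha -1}\}(s) = \frac{\Gamma (\alpha )}{s^{\alpha }},
\qquad {\text{hence}}\qquad
\mathcal {L}\{I^{\alpha }f\}(s) = \frac{1}{\Gamma (\alpha )}\cdot \frac{\Gamma (\alpha )}{s^{\alpha }}\,F(s) = s^{-\alpha }F(s).
```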
Laplace transforms "work" on relatively few functions, but they are often useful for solving fractional differential equations. | https://en.wikipedia.org/wiki/Riemann–Liouville_integral |
The Riemschneider thiocarbamate synthesis converts alkyl or aryl thiocyanates to thiocarbamates under acidic conditions, followed by hydrolysis with ice water. [ 1 ] The reaction was discovered by the German chemist Randolph Riemschneider [ de ] in 1951 as a more efficient method to produce thiocarbamates. [ 2 ] Some references spell the name Riemenschneider .
The Riemschneider reaction can also be used to create the corresponding N -substituted thiocarbamate from an alcohol or alkene . [ 3 ]
The mechanism for the conversion of an alcohol to the N -substituted thiocarbamate is shown below. [ 4 ] The reaction proceeds under acidic conditions. The alcohol is protonated by sulfuric acid, converting its hydroxyl group into water, which then leaves to create a carbocation . The mesomeric form of the cyano group reacts with the carbocation. The resulting cation is attacked by water, which then loses a proton to form the intermediate. The intermediate then undergoes hydrolysis to form the N -substituted thiocarbamate.
The reaction requires the formation of a carbocation and does not work for primary alcohols. Only secondary and tertiary alcohols undergo the Riemschneider reaction.
The Riemschneider thiocarbamate synthesis for aromatic compounds does not work efficiently for ortho-substituted compounds such as ortho-carboxy, ortho-methoxy or ortho-nitro derivative compounds. The reaction is also not as efficient for compounds that are sensitive to concentrated acid, such as thiocyanophenols. The reaction works well for other compounds. Various thiocyanate compounds underwent the Riemschneider synthesis to form thiocarbamates, and all had melting points similar to the predicted value. [ 5 ]
This chemical reaction article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Riemschneider_thiocarbamate_synthesis |
Rien John Schuurhuis (born 12 August 1982) is a Dutch-born Vatican road cyclist and industrial design engineer who competed for the Vatican City in the 2022 , 2023 and 2024 UCI Road World Championships . He was the first cyclist and athlete who represented the Vatican City as a regular scoring competitor. [ 2 ]
Schuurhuis started competing internationally for teams such as Oliver's Real Food Racing and Blank Inc Cycling Team under a Dutch sporting nationality.
In 2020 when he moved to Rome , he was "immediately drawn to the values and community spirit of Athletica Vaticana ", and in 2021 he started competing for the Vatican City . [ 3 ]
Schuurhuis was introduced to cycling at a young age, saying he "could ride a bike before I could walk," due to the cycling culture of the Netherlands . [ 2 ]
Schuurhuis holds Dutch citizenship due to his birth, Australian citizenship as his wife Chiara Porro is Australian, and Vatican citizenship since he resides there and as his wife is the Australian Ambassador to the Holy See . [ 2 ] [ 4 ] He moved to Rome in 2020. [ 3 ] He has two children, Thomas and George. [ 5 ] | https://en.wikipedia.org/wiki/Rien_Schuurhuis |
In mathematics , the Riesz potential is a potential named after its discoverer, the Hungarian mathematician Marcel Riesz . In a sense, the Riesz potential defines an inverse for a power of the Laplace operator on Euclidean space. It generalizes to several variables the Riemann–Liouville integrals of one variable.
If 0 < α < n , then the Riesz potential I α f of a locally integrable function f on R n is the function defined by
where the constant is given by
This singular integral is well-defined provided f decays sufficiently rapidly at infinity, specifically if f ∈ L p ( R n ) with 1 ≤ p < n / α . The classical result due to Sobolev states that the rate of decay of f and that of I α f are related in the form of an inequality (the Hardy–Littlewood–Sobolev inequality )
For p =1 the result was extended by ( Schikorra, Spector & Van Schaftingen 2014 ),
where R f = D I 1 f {\displaystyle Rf=DI_{1}f} is the vector-valued Riesz transform . More generally, the operators I α are well-defined for complex α such that 0 < Re α < n .
The Riesz potential can be defined more generally in a weak sense as the convolution
where K α is the locally integrable function:
The Riesz potential can therefore be defined whenever f is a compactly supported distribution. In this connection, the Riesz potential of a positive Borel measure μ with compact support is chiefly of interest in potential theory because I α μ is then a (continuous) subharmonic function off the support of μ, and is lower semicontinuous on all of R n .
Consideration of the Fourier transform reveals that the Riesz potential is a Fourier multiplier . [ 1 ] In fact, one has
and so, by the convolution theorem ,
The Riesz potentials satisfy the following semigroup property on, for instance, rapidly decreasing continuous functions
provided
Furthermore, if 0 < Re α < n –2 , then
One also has, for this class of functions, | https://en.wikipedia.org/wiki/Riesz_potential |
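On the Fourier side these properties are immediate. With the normalization in which the multiplier of I α is | ξ | −α :

```latex
\widehat{I_{\alpha }f}(\xi ) = |\xi |^{-\alpha }\,{\hat {f}}(\xi ),
\qquad
\widehat{I_{\alpha }I_{\beta }f}(\xi ) = |\xi |^{-\alpha }\,|\xi |^{-\beta }\,{\hat {f}}(\xi ) = \widehat{I_{\alpha +\beta }f}(\xi ),
```

valid when 0 < Re α , 0 < Re β and Re( α + β ) < n .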
In mathematics , the Riesz rearrangement inequality , sometimes called Riesz–Sobolev inequality, states that any three non-negative functions f : R n → R + {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} ^{+}} , g : R n → R + {\displaystyle g:\mathbb {R} ^{n}\to \mathbb {R} ^{+}} and h : R n → R + {\displaystyle h:\mathbb {R} ^{n}\to \mathbb {R} ^{+}} satisfy the inequality
where f ∗ : R n → R + {\displaystyle f^{*}:\mathbb {R} ^{n}\to \mathbb {R} ^{+}} , g ∗ : R n → R + {\displaystyle g^{*}:\mathbb {R} ^{n}\to \mathbb {R} ^{+}} and h ∗ : R n → R + {\displaystyle h^{*}:\mathbb {R} ^{n}\to \mathbb {R} ^{+}} are the symmetric decreasing rearrangements of the functions f {\displaystyle f} , g {\displaystyle g} and h {\displaystyle h} respectively.
The inequality was first proved by Frigyes Riesz in 1930, [ 1 ] and independently reproved by S. L. Sobolev in 1938. Brascamp, Lieb and Luttinger have shown that it can be generalized to arbitrarily (but finitely) many functions acting on arbitrarily many variables. [ 2 ]
The Riesz rearrangement inequality can be used to prove the Pólya–Szegő inequality .
In the one-dimensional case, the inequality is first proved when the functions f {\displaystyle f} , g {\displaystyle g} and h {\displaystyle h} are characteristic functions of finite unions of intervals. Then the inequality can be extended to characteristic functions of measurable sets, to measurable functions taking a finite number of values and finally to nonnegative measurable functions. [ 3 ]
In order to pass from the one-dimensional case to the higher-dimensional case, the spherical rearrangement is approximated by Steiner symmetrization for which the one-dimensional argument applies directly by Fubini's theorem. [ 4 ]
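Since the one-dimensional proof reduces to step functions, a discrete analogue on the integers can be checked directly. A sketch (the helper names are illustrative; `rearrange` implements the symmetric-decreasing rearrangement of a finitely supported nonnegative sequence):

```python
# Discrete analogue on Z of the triple integral: S(f, g, h) = sum over x, y
# of f(x) g(x - y) h(y).  The symmetric-decreasing rearrangement places the
# values in decreasing order at positions 0, 1, -1, 2, -2, ...
def rearrange(func):
    vals = sorted(func.values(), reverse=True)
    return {((k + 1) // 2 if k % 2 else -(k // 2)): v
            for k, v in enumerate(vals)}

def triple_sum(f, g, h):
    return sum(fx * g.get(x - y, 0.0) * hy
               for x, fx in f.items() for y, hy in h.items())

f = {0: 1.0, 1: 3.0, 4: 2.0}
g = {-2: 1.0, 0: 0.5, 3: 2.0}
h = {1: 2.0, 2: 1.0}

lhs = triple_sum(f, g, h)                                   # 12.0
rhs = triple_sum(rearrange(f), rearrange(g), rearrange(h))  # 22.5
assert lhs <= rhs
```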
In the case where any one of the three functions is a strictly symmetric-decreasing function, equality holds only when the other two functions are equal, up to translation, to their symmetric-decreasing rearrangements. [ 5 ] | https://en.wikipedia.org/wiki/Riesz_rearrangement_inequality |
The Riesz representation theorem , sometimes called the Riesz–Fréchet representation theorem after Frigyes Riesz and Maurice René Fréchet , establishes an important connection between a Hilbert space and its continuous dual space . If the underlying field is the real numbers , the two are isometrically isomorphic ; if the underlying field is the complex numbers , the two are isometrically anti-isomorphic . The (anti-) isomorphism is a particular natural isomorphism .
Let H {\displaystyle H} be a Hilbert space over a field F , {\displaystyle \mathbb {F} ,} where F {\displaystyle \mathbb {F} } is either the real numbers R {\displaystyle \mathbb {R} } or the complex numbers C . {\displaystyle \mathbb {C} .} If F = C {\displaystyle \mathbb {F} =\mathbb {C} } (resp. if F = R {\displaystyle \mathbb {F} =\mathbb {R} } ) then H {\displaystyle H} is called a complex Hilbert space (resp. a real Hilbert space ). Every real Hilbert space can be extended to be a dense subset of a unique (up to bijective isometry ) complex Hilbert space, called its complexification , which is why Hilbert spaces are often automatically assumed to be complex. Real and complex Hilbert spaces have in common many, but by no means all, properties and results/theorems.
This article is intended for both mathematicians and physicists and will describe the theorem for both.
In both mathematics and physics, if a Hilbert space is assumed to be real (that is, if F = R {\displaystyle \mathbb {F} =\mathbb {R} } ) then this will usually be made clear. Often in mathematics, and especially in physics, unless indicated otherwise, "Hilbert space" is usually automatically assumed to mean "complex Hilbert space." Depending on the author, in mathematics, "Hilbert space" usually means either (1) a complex Hilbert space, or (2) a real or complex Hilbert space.
By definition, an antilinear map (also called a conjugate-linear map ) f : H → Y {\displaystyle f:H\to Y} is a map between vector spaces that is additive : f ( x + y ) = f ( x ) + f ( y ) for all x , y ∈ H , {\displaystyle f(x+y)=f(x)+f(y)\quad {\text{ for all }}x,y\in H,} and antilinear (also called conjugate-linear or conjugate-homogeneous ): f ( c x ) = c ¯ f ( x ) for all x ∈ H and all scalar c ∈ F , {\displaystyle f(cx)={\overline {c}}f(x)\quad {\text{ for all }}x\in H{\text{ and all scalar }}c\in \mathbb {F} ,} where c ¯ {\displaystyle {\overline {c}}} is the conjugate of the complex number c = a + b i {\displaystyle c=a+bi} , given by c ¯ = a − b i {\displaystyle {\overline {c}}=a-bi} .
In contrast, a map f : H → Y {\displaystyle f:H\to Y} is linear if it is additive and homogeneous : f ( c x ) = c f ( x ) for all x ∈ H and all scalars c ∈ F . {\displaystyle f(cx)=cf(x)\quad {\text{ for all }}x\in H\quad {\text{ and all scalars }}c\in \mathbb {F} .}
Every constant 0 {\displaystyle 0} map is always both linear and antilinear. If F = R {\displaystyle \mathbb {F} =\mathbb {R} } then the definitions of linear maps and antilinear maps are completely identical. A linear map from a Hilbert space into a Banach space (or more generally, from any Banach space into any topological vector space ) is continuous if and only if it is bounded ; the same is true of antilinear maps. The inverse of any antilinear (resp. linear) bijection is again an antilinear (resp. linear) bijection. The composition of two antilinear maps is a linear map.
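These rules are easy to illustrate in one complex dimension. A sketch (the constants `c1`, `c2` are arbitrary choices):

```python
# Two antilinear maps on C: T1(x) = c1 * conj(x), T2(x) = c2 * conj(x).
# Their composition is x -> c2 * conj(c1) * x, a linear map.
c1, c2 = 2 - 1j, 0.5 + 3j

T1 = lambda x: c1 * x.conjugate()
T2 = lambda x: c2 * x.conjugate()
compose = lambda x: T2(T1(x))

x, s = 1.2 - 0.7j, 3 + 4j              # test vector and scalar
# homogeneity with an (unconjugated) scalar => the composition is linear
assert abs(compose(s * x) - s * compose(x)) < 1e-12
# while T1 alone is antilinear: T1(s x) = conj(s) T1(x)
assert abs(T1(s * x) - s.conjugate() * T1(x)) < 1e-12
```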
Continuous dual and anti-dual spaces
A functional on H {\displaystyle H} is a function H → F {\displaystyle H\to \mathbb {F} } whose codomain is the underlying scalar field F . {\displaystyle \mathbb {F} .} Denote by H ∗ {\displaystyle H^{*}} (resp. by H ¯ ∗ ) {\displaystyle {\overline {H}}^{*})} the set of all continuous linear (resp. continuous antilinear) functionals on H , {\displaystyle H,} which is called the (continuous) dual space (resp. the (continuous) anti-dual space ) of H . {\displaystyle H.} [ 1 ] If F = R {\displaystyle \mathbb {F} =\mathbb {R} } then linear functionals on H {\displaystyle H} are the same as antilinear functionals and consequently, the same is true for such continuous maps: that is, H ∗ = H ¯ ∗ . {\displaystyle H^{*}={\overline {H}}^{*}.}
One-to-one correspondence between linear and antilinear functionals
Given any functional f : H → F , {\displaystyle f~:~H\to \mathbb {F} ,} the conjugate of f {\displaystyle f} is the functional f ¯ : H → F h ↦ f ( h ) ¯ . {\displaystyle {\begin{alignedat}{4}{\overline {f}}:\,&H&&\to \,&&\mathbb {F} \\&h&&\mapsto \,&&{\overline {f(h)}}.\\\end{alignedat}}}
This assignment is most useful when F = C {\displaystyle \mathbb {F} =\mathbb {C} } because if F = R {\displaystyle \mathbb {F} =\mathbb {R} } then f = f ¯ {\displaystyle f={\overline {f}}} and the assignment f ↦ f ¯ {\displaystyle f\mapsto {\overline {f}}} reduces down to the identity map .
The assignment f ↦ f ¯ {\displaystyle f\mapsto {\overline {f}}} defines an antilinear bijective correspondence from the set of
onto the set of
The Hilbert space H {\displaystyle H} has an associated inner product H × H → F {\displaystyle H\times H\to \mathbb {F} } valued in H {\displaystyle H} 's underlying scalar field F {\displaystyle \mathbb {F} } that is linear in one coordinate and antilinear in the other (as specified below).
If H {\displaystyle H} is a complex Hilbert space ( F = C {\displaystyle \mathbb {F} =\mathbb {C} } ), then there is a crucial difference between the notations prevailing in mathematics versus physics, regarding which of the two variables is linear.
However, for real Hilbert spaces ( F = R {\displaystyle \mathbb {F} =\mathbb {R} } ), the inner product is a symmetric map that is linear in each coordinate ( bilinear ), so there can be no such confusion.
In mathematics , the inner product on a Hilbert space H {\displaystyle H} is often denoted by ⟨ ⋅ , ⋅ ⟩ {\displaystyle \left\langle \cdot \,,\cdot \right\rangle } or ⟨ ⋅ , ⋅ ⟩ H {\displaystyle \left\langle \cdot \,,\cdot \right\rangle _{H}} while in physics , the bra–ket notation ⟨ ⋅ ∣ ⋅ ⟩ {\displaystyle \left\langle \cdot \mid \cdot \right\rangle } or ⟨ ⋅ ∣ ⋅ ⟩ H {\displaystyle \left\langle \cdot \mid \cdot \right\rangle _{H}} is typically used. In this article, these two notations will be related by the equality:
⟨ x , y ⟩ := ⟨ y ∣ x ⟩ for all x , y ∈ H . {\displaystyle \left\langle x,y\right\rangle :=\left\langle y\mid x\right\rangle \quad {\text{ for all }}x,y\in H.} These have the following properties:
In computations, one must consistently use either the mathematics notation ⟨ ⋅ , ⋅ ⟩ {\displaystyle \left\langle \cdot \,,\cdot \right\rangle } , which is (linear, antilinear); or the physics notation ⟨ ⋅ ∣ ⋅ ⟩ {\displaystyle \left\langle \cdot \mid \cdot \right\rangle } , which is (antilinear | linear).
If x = y {\displaystyle x=y} then ⟨ x ∣ x ⟩ = ⟨ x , x ⟩ {\displaystyle \langle \,x\mid x\,\rangle =\langle \,x,x\,\rangle } is a non-negative real number and the map ‖ x ‖ := ⟨ x , x ⟩ = ⟨ x ∣ x ⟩ {\displaystyle \|x\|:={\sqrt {\langle x,x\rangle }}={\sqrt {\langle x\mid x\rangle }}}
defines a canonical norm on H {\displaystyle H} that makes H {\displaystyle H} into a normed space . [ 1 ] As with all normed spaces, the (continuous) dual space H ∗ {\displaystyle H^{*}} carries a canonical norm, called the dual norm , that is defined by [ 1 ] ‖ f ‖ H ∗ := sup ‖ x ‖ ≤ 1 , x ∈ H | f ( x ) | for every f ∈ H ∗ . {\displaystyle \|f\|_{H^{*}}~:=~\sup _{\|x\|\leq 1,x\in H}|f(x)|\quad {\text{ for every }}f\in H^{*}.}
The canonical norm on the (continuous) anti-dual space H ¯ ∗ , {\displaystyle {\overline {H}}^{*},} denoted by ‖ f ‖ H ¯ ∗ , {\displaystyle \|f\|_{{\overline {H}}^{*}},} is defined by using this same equation: [ 1 ] ‖ f ‖ H ¯ ∗ := sup ‖ x ‖ ≤ 1 , x ∈ H | f ( x ) | for every f ∈ H ¯ ∗ . {\displaystyle \|f\|_{{\overline {H}}^{*}}~:=~\sup _{\|x\|\leq 1,x\in H}|f(x)|\quad {\text{ for every }}f\in {\overline {H}}^{*}.}
This canonical norm on H ∗ {\displaystyle H^{*}} satisfies the parallelogram law , which means that the polarization identity can be used to define a canonical inner product on H ∗ , {\displaystyle H^{*},} which this article will denote by the notations ⟨ f , g ⟩ H ∗ := ⟨ g ∣ f ⟩ H ∗ , {\displaystyle \left\langle f,g\right\rangle _{H^{*}}:=\left\langle g\mid f\right\rangle _{H^{*}},} where this inner product turns H ∗ {\displaystyle H^{*}} into a Hilbert space. There are now two ways of defining a norm on H ∗ : {\displaystyle H^{*}:} the norm induced by this inner product (that is, the norm defined by f ↦ ⟨ f , f ⟩ H ∗ {\displaystyle f\mapsto {\sqrt {\left\langle f,f\right\rangle _{H^{*}}}}} ) and the usual dual norm (defined as the supremum over the closed unit ball). These norms are the same; explicitly, this means that the following holds for every f ∈ H ∗ : {\displaystyle f\in H^{*}:} sup ‖ x ‖ ≤ 1 , x ∈ H | f ( x ) | = ‖ f ‖ H ∗ = ⟨ f , f ⟩ H ∗ = ⟨ f ∣ f ⟩ H ∗ . {\displaystyle \sup _{\|x\|\leq 1,x\in H}|f(x)|=\|f\|_{H^{*}}~=~{\sqrt {\langle f,f\rangle _{H^{*}}}}~=~{\sqrt {\langle f\mid f\rangle _{H^{*}}}}.}
As will be described later, the Riesz representation theorem can be used to give an equivalent definition of the canonical norm and the canonical inner product on H ∗ . {\displaystyle H^{*}.}
The same equations that were used above can also be used to define a norm and inner product on H {\displaystyle H} 's anti-dual space H ¯ ∗ . {\displaystyle {\overline {H}}^{*}.} [ 1 ]
Canonical isometry between the dual and antidual
The complex conjugate f ¯ {\displaystyle {\overline {f}}} of a functional f , {\displaystyle f,} which was defined above, satisfies ‖ f ‖ H ∗ = ‖ f ¯ ‖ H ¯ ∗ and ‖ g ¯ ‖ H ∗ = ‖ g ‖ H ¯ ∗ {\displaystyle \|f\|_{H^{*}}~=~\left\|{\overline {f}}\right\|_{{\overline {H}}^{*}}\quad {\text{ and }}\quad \left\|{\overline {g}}\right\|_{H^{*}}~=~\|g\|_{{\overline {H}}^{*}}} for every f ∈ H ∗ {\displaystyle f\in H^{*}} and every g ∈ H ¯ ∗ . {\displaystyle g\in {\overline {H}}^{*}.} This says exactly that the canonical antilinear bijection defined by Cong : H ∗ → H ¯ ∗ f ↦ f ¯ {\displaystyle {\begin{alignedat}{4}\operatorname {Cong} :\;&&H^{*}&&\;\to \;&{\overline {H}}^{*}\\[0.3ex]&&f&&\;\mapsto \;&{\overline {f}}\\\end{alignedat}}} as well as its inverse Cong − 1 : H ¯ ∗ → H ∗ {\displaystyle \operatorname {Cong} ^{-1}~:~{\overline {H}}^{*}\to H^{*}} are antilinear isometries and consequently also homeomorphisms .
The inner products on the dual space H ∗ {\displaystyle H^{*}} and the anti-dual space H ¯ ∗ , {\displaystyle {\overline {H}}^{*},} denoted respectively by ⟨ ⋅ , ⋅ ⟩ H ∗ {\displaystyle \langle \,\cdot \,,\,\cdot \,\rangle _{H^{*}}} and ⟨ ⋅ , ⋅ ⟩ H ¯ ∗ , {\displaystyle \langle \,\cdot \,,\,\cdot \,\rangle _{{\overline {H}}^{*}},} are related by ⟨ f ¯ | g ¯ ⟩ H ¯ ∗ = ⟨ f | g ⟩ H ∗ ¯ = ⟨ g | f ⟩ H ∗ for all f , g ∈ H ∗ {\displaystyle \langle \,{\overline {f}}\,|\,{\overline {g}}\,\rangle _{{\overline {H}}^{*}}={\overline {\langle \,f\,|\,g\,\rangle _{H^{*}}}}=\langle \,g\,|\,f\,\rangle _{H^{*}}\qquad {\text{ for all }}f,g\in H^{*}} and ⟨ f ¯ | g ¯ ⟩ H ∗ = ⟨ f | g ⟩ H ¯ ∗ ¯ = ⟨ g | f ⟩ H ¯ ∗ for all f , g ∈ H ¯ ∗ . {\displaystyle \langle \,{\overline {f}}\,|\,{\overline {g}}\,\rangle _{H^{*}}={\overline {\langle \,f\,|\,g\,\rangle _{{\overline {H}}^{*}}}}=\langle \,g\,|\,f\,\rangle _{{\overline {H}}^{*}}\qquad {\text{ for all }}f,g\in {\overline {H}}^{*}.}
If F = R {\displaystyle \mathbb {F} =\mathbb {R} } then H ∗ = H ¯ ∗ {\displaystyle H^{*}={\overline {H}}^{*}} and this canonical map Cong : H ∗ → H ¯ ∗ {\displaystyle \operatorname {Cong} :H^{*}\to {\overline {H}}^{*}} reduces down to the identity map.
Two vectors x {\displaystyle x} and y {\displaystyle y} are orthogonal if ⟨ x , y ⟩ = 0 , {\displaystyle \langle x,y\rangle =0,} which happens if and only if ‖ y ‖ ≤ ‖ y + s x ‖ {\displaystyle \|y\|\leq \|y+sx\|} for all scalars s . {\displaystyle s.} [ 2 ] The orthogonal complement of a subset X ⊆ H {\displaystyle X\subseteq H} is X ⊥ := { y ∈ H : ⟨ y , x ⟩ = 0 for all x ∈ X } , {\displaystyle X^{\bot }:=\{\,y\in H:\langle y,x\rangle =0{\text{ for all }}x\in X\,\},} which is always a closed vector subspace of H . {\displaystyle H.} The Hilbert projection theorem guarantees that for any nonempty closed convex subset C {\displaystyle C} of a Hilbert space there exists a unique vector m ∈ C {\displaystyle m\in C} such that ‖ m ‖ = inf c ∈ C ‖ c ‖ ; {\displaystyle \|m\|=\inf _{c\in C}\|c\|;} that is, m ∈ C {\displaystyle m\in C} is the (unique) global minimum point of the function C → [ 0 , ∞ ) {\displaystyle C\to [0,\infty )} defined by c ↦ ‖ c ‖ . {\displaystyle c\mapsto \|c\|.}
Riesz representation theorem — Let H {\displaystyle H} be a Hilbert space whose inner product ⟨ x , y ⟩ {\displaystyle \left\langle x,y\right\rangle } is linear in its first argument and antilinear in its second argument and let ⟨ y ∣ x ⟩ := ⟨ x , y ⟩ {\displaystyle \langle y\mid x\rangle :=\langle x,y\rangle } be the corresponding physics notation. For every continuous linear functional φ ∈ H ∗ , {\displaystyle \varphi \in H^{*},} there exists a unique vector f φ ∈ H , {\displaystyle f_{\varphi }\in H,} called the Riesz representation of φ , {\displaystyle \varphi ,} such that [ 3 ] φ ( x ) = ⟨ x , f φ ⟩ = ⟨ f φ ∣ x ⟩ for all x ∈ H . {\displaystyle \varphi (x)=\left\langle x,f_{\varphi }\right\rangle =\left\langle f_{\varphi }\mid x\right\rangle \quad {\text{ for all }}x\in H.}
Importantly for complex Hilbert spaces, f φ {\displaystyle f_{\varphi }} is always located in the antilinear coordinate of the inner product. [ note 1 ]
Furthermore, the length of the representation vector is equal to the norm of the functional: ‖ f φ ‖ H = ‖ φ ‖ H ∗ , {\displaystyle \left\|f_{\varphi }\right\|_{H}=\|\varphi \|_{H^{*}},} and f φ {\displaystyle f_{\varphi }} is the unique vector f φ ∈ ( ker φ ) ⊥ {\displaystyle f_{\varphi }\in \left(\ker \varphi \right)^{\bot }} with φ ( f φ ) = ‖ φ ‖ 2 . {\displaystyle \varphi \left(f_{\varphi }\right)=\|\varphi \|^{2}.} It is also the unique element of minimum norm in C := φ − 1 ( ‖ φ ‖ 2 ) {\displaystyle C:=\varphi ^{-1}\left(\|\varphi \|^{2}\right)} ; that is to say, f φ {\displaystyle f_{\varphi }} is the unique element of C {\displaystyle C} satisfying ‖ f φ ‖ = inf c ∈ C ‖ c ‖ . {\displaystyle \left\|f_{\varphi }\right\|=\inf _{c\in C}\|c\|.} Moreover, any non-zero q ∈ ( ker φ ) ⊥ {\displaystyle q\in (\ker \varphi )^{\bot }} can be written as q = ( ‖ q ‖ 2 / φ ( q ) ¯ ) f φ . {\displaystyle q=\left(\|q\|^{2}/\,{\overline {\varphi (q)}}\right)\ f_{\varphi }.}
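In the finite-dimensional case H = ℂⁿ with the standard inner product (treated in detail in a later section), these statements can be checked numerically. The following is a minimal illustrative sketch using NumPy, where the dimension, the functional, and the random seed are arbitrary choices; the convention ⟨z | w⟩ = z̄ᵀw (antilinear in the first argument) matches the physics notation used in the theorem:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# A continuous linear functional on C^n: phi(x) = phi_vec @ x
phi_vec = rng.standard_normal(n) + 1j * rng.standard_normal(n)
phi = lambda x: phi_vec @ x

# Its Riesz representation under <z | w> = conj(z)^T w is f = conj(phi_vec)
f = np.conj(phi_vec)

# phi(x) = <f | x> for an arbitrary test vector x
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
assert np.isclose(phi(x), np.conj(f) @ x)

# ||f||_H = ||phi||_{H*}, and phi(f) = ||phi||^2 is real and non-negative
assert np.isclose(np.linalg.norm(f), np.linalg.norm(phi_vec))
assert np.isclose(phi(f), np.linalg.norm(phi_vec) ** 2)
```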
Corollary — The canonical map from H {\displaystyle H} into its dual H ∗ {\displaystyle H^{*}} [ 1 ] is the injective antilinear isometry [ note 2 ] [ 1 ] Φ : H → H ∗ y ↦ ⟨ ⋅ , y ⟩ = ⟨ y | ⋅ ⟩ {\displaystyle {\begin{alignedat}{4}\Phi :\;&&H&&\;\to \;&H^{*}\\[0.3ex]&&y&&\;\mapsto \;&\langle \,\cdot \,,y\rangle =\langle y|\,\cdot \,\rangle \\\end{alignedat}}} The Riesz representation theorem states that this map is surjective (and thus bijective ) when H {\displaystyle H} is complete and that its inverse is the bijective isometric antilinear isomorphism Φ − 1 : H ∗ → H φ ↦ f φ . {\displaystyle {\begin{alignedat}{4}\Phi ^{-1}:\;&&H^{*}&&\;\to \;&H\\[0.3ex]&&\varphi &&\;\mapsto \;&f_{\varphi }\\\end{alignedat}}.} Consequently, every continuous linear functional on the Hilbert space H {\displaystyle H} can be written uniquely in the form ⟨ y | ⋅ ⟩ {\displaystyle \langle y\,|\,\cdot \,\rangle } [ 1 ] where ‖ ⟨ y | ⋅ ⟩ ‖ H ∗ = ‖ y ‖ H {\displaystyle \|\langle y\,|\cdot \rangle \|_{H^{*}}=\|y\|_{H}} for every y ∈ H . {\displaystyle y\in H.} The assignment y ↦ ⟨ y , ⋅ ⟩ = ⟨ ⋅ | y ⟩ {\displaystyle y\mapsto \langle y,\cdot \rangle =\langle \cdot \,|\,y\rangle } can also be viewed as a bijective linear isometry H → H ¯ ∗ {\displaystyle H\to {\overline {H}}^{*}} into the anti-dual space of H , {\displaystyle H,} [ 1 ] which is the complex conjugate vector space of the continuous dual space H ∗ . {\displaystyle H^{*}.}
The inner products on H {\displaystyle H} and H ∗ {\displaystyle H^{*}} are related by ⟨ Φ h , Φ k ⟩ H ∗ = ⟨ h , k ⟩ ¯ H = ⟨ k , h ⟩ H for all h , k ∈ H {\displaystyle \left\langle \Phi h,\Phi k\right\rangle _{H^{*}}={\overline {\langle h,k\rangle }}_{H}=\langle k,h\rangle _{H}\quad {\text{ for all }}h,k\in H} and similarly, ⟨ Φ − 1 φ , Φ − 1 ψ ⟩ H = ⟨ φ , ψ ⟩ ¯ H ∗ = ⟨ ψ , φ ⟩ H ∗ for all φ , ψ ∈ H ∗ . {\displaystyle \left\langle \Phi ^{-1}\varphi ,\Phi ^{-1}\psi \right\rangle _{H}={\overline {\langle \varphi ,\psi \rangle }}_{H^{*}}=\left\langle \psi ,\varphi \right\rangle _{H^{*}}\quad {\text{ for all }}\varphi ,\psi \in H^{*}.}
The set C := φ − 1 ( ‖ φ ‖ 2 ) {\displaystyle C:=\varphi ^{-1}\left(\|\varphi \|^{2}\right)} satisfies C = f φ + ker φ {\displaystyle C=f_{\varphi }+\ker \varphi } and C − f φ = ker φ {\displaystyle C-f_{\varphi }=\ker \varphi } so when f φ ≠ 0 {\displaystyle f_{\varphi }\neq 0} then C {\displaystyle C} can be interpreted as being the affine hyperplane [ note 3 ] that is parallel to the vector subspace ker φ {\displaystyle \ker \varphi } and contains f φ . {\displaystyle f_{\varphi }.}
For y ∈ H , {\displaystyle y\in H,} the physics notation for the functional Φ ( y ) ∈ H ∗ {\displaystyle \Phi (y)\in H^{*}} is the bra ⟨ y | , {\displaystyle \langle y|,} where explicitly this means that ⟨ y | := Φ ( y ) , {\displaystyle \langle y|:=\Phi (y),} which complements the ket notation | y ⟩ {\displaystyle |y\rangle } defined by | y ⟩ := y . {\displaystyle |y\rangle :=y.} In the mathematical treatment of quantum mechanics , the theorem can be seen as a justification for the popular bra–ket notation . The theorem says that every bra ⟨ ψ | {\displaystyle \langle \psi \,|} has a corresponding ket | ψ ⟩ , {\displaystyle |\,\psi \rangle ,} and the latter is unique.
Historically, the theorem is often attributed simultaneously to Riesz and Fréchet in 1907 (see references).
Let F {\displaystyle \mathbb {F} } denote the underlying scalar field of H . {\displaystyle H.}
Proof of norm formula:
Fix y ∈ H . {\displaystyle y\in H.} Define Λ : H → F {\displaystyle \Lambda :H\to \mathbb {F} } by Λ ( z ) := ⟨ y | z ⟩ , {\displaystyle \Lambda (z):=\langle \,y\,|\,z\,\rangle ,} which is a linear functional on H {\displaystyle H} since z {\displaystyle z} is in the linear argument.
By the Cauchy–Schwarz inequality , | Λ ( z ) | = | ⟨ y | z ⟩ | ≤ ‖ y ‖ ‖ z ‖ {\displaystyle |\Lambda (z)|=|\langle \,y\,|\,z\,\rangle |\leq \|y\|\|z\|} which shows that Λ {\displaystyle \Lambda } is bounded (equivalently, continuous ) and that ‖ Λ ‖ ≤ ‖ y ‖ . {\displaystyle \|\Lambda \|\leq \|y\|.} It remains to show that ‖ y ‖ ≤ ‖ Λ ‖ . {\displaystyle \|y\|\leq \|\Lambda \|.} By using y {\displaystyle y} in place of z , {\displaystyle z,} it follows that ‖ y ‖ 2 = ⟨ y | y ⟩ = Λ y = | Λ ( y ) | ≤ ‖ Λ ‖ ‖ y ‖ {\displaystyle \|y\|^{2}=\langle \,y\,|\,y\,\rangle =\Lambda y=|\Lambda (y)|\leq \|\Lambda \|\|y\|} (the equality Λ y = | Λ ( y ) | {\displaystyle \Lambda y=|\Lambda (y)|} holds because Λ y = ‖ y ‖ 2 ≥ 0 {\displaystyle \Lambda y=\|y\|^{2}\geq 0} is real and non-negative).
Thus ‖ Λ ‖ = ‖ y ‖ . {\displaystyle \|\Lambda \|=\|y\|.} ◼ {\displaystyle \blacksquare }
The proof above did not use the fact that H {\displaystyle H} is complete , which shows that the formula for the norm ‖ ⟨ y | ⋅ ⟩ ‖ H ∗ = ‖ y ‖ H {\displaystyle \|\langle \,y\,|\,\cdot \,\rangle \|_{H^{*}}=\|y\|_{H}} holds more generally for all inner product spaces .
Proof that a Riesz representation of φ {\displaystyle \varphi } is unique:
Suppose f , g ∈ H {\displaystyle f,g\in H} are such that φ ( z ) = ⟨ f | z ⟩ {\displaystyle \varphi (z)=\langle \,f\,|\,z\,\rangle } and φ ( z ) = ⟨ g | z ⟩ {\displaystyle \varphi (z)=\langle \,g\,|\,z\,\rangle } for all z ∈ H . {\displaystyle z\in H.} Then ⟨ f − g | z ⟩ = ⟨ f | z ⟩ − ⟨ g | z ⟩ = φ ( z ) − φ ( z ) = 0 for all z ∈ H {\displaystyle \langle \,f-g\,|\,z\,\rangle =\langle \,f\,|\,z\,\rangle -\langle \,g\,|\,z\,\rangle =\varphi (z)-\varphi (z)=0\quad {\text{ for all }}z\in H} which shows that Λ := ⟨ f − g | ⋅ ⟩ {\displaystyle \Lambda :=\langle \,f-g\,|\,\cdot \,\rangle } is the constant 0 {\displaystyle 0} linear functional.
Consequently 0 = ‖ ⟨ f − g | ⋅ ⟩ ‖ = ‖ f − g ‖ , {\displaystyle 0=\|\langle \,f-g\,|\,\cdot \,\rangle \|=\|f-g\|,} which implies that f − g = 0. {\displaystyle f-g=0.} ◼ {\displaystyle \blacksquare }
Proof that a vector f φ {\displaystyle f_{\varphi }} representing φ {\displaystyle \varphi } exists:
Let K := ker φ := { m ∈ H : φ ( m ) = 0 } . {\displaystyle K:=\ker \varphi :=\{m\in H:\varphi (m)=0\}.} If K = H {\displaystyle K=H} (or equivalently, if φ = 0 {\displaystyle \varphi =0} ) then taking f φ := 0 {\displaystyle f_{\varphi }:=0} completes the proof so assume that K ≠ H {\displaystyle K\neq H} and φ ≠ 0. {\displaystyle \varphi \neq 0.} The continuity of φ {\displaystyle \varphi } implies that K {\displaystyle K} is a closed subspace of H {\displaystyle H} (because K = φ − 1 ( { 0 } ) {\displaystyle K=\varphi ^{-1}(\{0\})} and { 0 } {\displaystyle \{0\}} is a closed subset of F {\displaystyle \mathbb {F} } ).
Let K ⊥ := { v ∈ H : ⟨ v | k ⟩ = 0 for all k ∈ K } {\displaystyle K^{\bot }:=\{v\in H~:~\langle \,v\,|\,k\,\rangle =0~{\text{ for all }}k\in K\}} denote the orthogonal complement of K {\displaystyle K} in H . {\displaystyle H.} Because K {\displaystyle K} is closed and H {\displaystyle H} is a Hilbert space, [ note 4 ] H {\displaystyle H} can be written as the direct sum H = K ⊕ K ⊥ {\displaystyle H=K\oplus K^{\bot }} [ note 5 ] (a proof of this is given in the article on the Hilbert projection theorem ).
Because K ≠ H , {\displaystyle K\neq H,} there exists some non-zero p ∈ K ⊥ . {\displaystyle p\in K^{\bot }.} For any h ∈ H , {\displaystyle h\in H,} φ [ ( φ h ) p − ( φ p ) h ] = φ [ ( φ h ) p ] − φ [ ( φ p ) h ] = ( φ h ) φ p − ( φ p ) φ h = 0 , {\displaystyle \varphi [(\varphi h)p-(\varphi p)h]~=~\varphi [(\varphi h)p]-\varphi [(\varphi p)h]~=~(\varphi h)\varphi p-(\varphi p)\varphi h=0,} which shows that ( φ h ) p − ( φ p ) h ∈ ker φ = K , {\displaystyle (\varphi h)p-(\varphi p)h~\in ~\ker \varphi =K,} where now p ∈ K ⊥ {\displaystyle p\in K^{\bot }} implies 0 = ⟨ p | ( φ h ) p − ( φ p ) h ⟩ = ⟨ p | ( φ h ) p ⟩ − ⟨ p | ( φ p ) h ⟩ = ( φ h ) ⟨ p | p ⟩ − ( φ p ) ⟨ p | h ⟩ . {\displaystyle 0=\langle \,p\,|\,(\varphi h)p-(\varphi p)h\,\rangle ~=~\langle \,p\,|\,(\varphi h)p\,\rangle -\langle \,p\,|\,(\varphi p)h\,\rangle ~=~(\varphi h)\langle \,p\,|\,p\,\rangle -(\varphi p)\langle \,p\,|\,h\,\rangle .} Solving for φ h {\displaystyle \varphi h} shows that φ h = ( φ p ) ⟨ p | h ⟩ ‖ p ‖ 2 = ⟨ φ p ¯ ‖ p ‖ 2 p | h ⟩ for every h ∈ H , {\displaystyle \varphi h={\frac {(\varphi p)\langle \,p\,|\,h\,\rangle }{\|p\|^{2}}}=\left\langle \,{\frac {\overline {\varphi p}}{\|p\|^{2}}}p\,{\Bigg |}\,h\,\right\rangle \quad {\text{ for every }}h\in H,} which proves that the vector f φ := φ p ¯ ‖ p ‖ 2 p {\displaystyle f_{\varphi }:={\frac {\overline {\varphi p}}{\|p\|^{2}}}p} satisfies φ h = ⟨ f φ | h ⟩ for every h ∈ H . {\displaystyle \varphi h=\langle \,f_{\varphi }\,|\,h\,\rangle {\text{ for every }}h\in H.}
Applying the norm formula that was proved above with y := f φ {\displaystyle y:=f_{\varphi }} shows that ‖ φ ‖ H ∗ = ‖ ⟨ f φ | ⋅ ⟩ ‖ H ∗ = ‖ f φ ‖ H . {\displaystyle \|\varphi \|_{H^{*}}=\left\|\left\langle \,f_{\varphi }\,|\,\cdot \,\right\rangle \right\|_{H^{*}}=\left\|f_{\varphi }\right\|_{H}.} Also, the vector u := p ‖ p ‖ {\displaystyle u:={\frac {p}{\|p\|}}} has norm ‖ u ‖ = 1 {\displaystyle \|u\|=1} and satisfies f φ := φ ( u ) ¯ u . {\displaystyle f_{\varphi }:={\overline {\varphi (u)}}u.} ◼ {\displaystyle \blacksquare }
It can now be deduced that K ⊥ {\displaystyle K^{\bot }} is 1 {\displaystyle 1} -dimensional when φ ≠ 0. {\displaystyle \varphi \neq 0.} Let q ∈ K ⊥ {\displaystyle q\in K^{\bot }} be any non-zero vector. Replacing p {\displaystyle p} with q {\displaystyle q} in the proof above shows that the vector g := φ q ¯ ‖ q ‖ 2 q {\displaystyle g:={\frac {\overline {\varphi q}}{\|q\|^{2}}}q} satisfies φ ( h ) = ⟨ g | h ⟩ {\displaystyle \varphi (h)=\langle \,g\,|\,h\,\rangle } for every h ∈ H . {\displaystyle h\in H.} The uniqueness of the (non-zero) vector f φ {\displaystyle f_{\varphi }} representing φ {\displaystyle \varphi } implies that f φ = g , {\displaystyle f_{\varphi }=g,} which in turn implies that φ q ¯ ≠ 0 {\displaystyle {\overline {\varphi q}}\neq 0} and q = ‖ q ‖ 2 φ q ¯ f φ . {\displaystyle q={\frac {\|q\|^{2}}{\overline {\varphi q}}}f_{\varphi }.} Thus every vector in K ⊥ {\displaystyle K^{\bot }} is a scalar multiple of f φ . {\displaystyle f_{\varphi }.} ◼ {\displaystyle \blacksquare }
The formulas for the inner products follow from the polarization identity .
If φ ∈ H ∗ {\displaystyle \varphi \in H^{*}} then φ ( f φ ) = ⟨ f φ , f φ ⟩ = ‖ f φ ‖ 2 = ‖ φ ‖ 2 . {\displaystyle \varphi \left(f_{\varphi }\right)=\left\langle f_{\varphi },f_{\varphi }\right\rangle =\left\|f_{\varphi }\right\|^{2}=\|\varphi \|^{2}.} So in particular, φ ( f φ ) ≥ 0 {\displaystyle \varphi \left(f_{\varphi }\right)\geq 0} is always real and furthermore, φ ( f φ ) = 0 {\displaystyle \varphi \left(f_{\varphi }\right)=0} if and only if f φ = 0 {\displaystyle f_{\varphi }=0} if and only if φ = 0. {\displaystyle \varphi =0.}
Linear functionals as affine hyperplanes
A non-trivial continuous linear functional φ {\displaystyle \varphi } is often interpreted geometrically by identifying it with the affine hyperplane A := φ − 1 ( 1 ) {\displaystyle A:=\varphi ^{-1}(1)} (the kernel ker φ = φ − 1 ( 0 ) {\displaystyle \ker \varphi =\varphi ^{-1}(0)} is also often visualized alongside A := φ − 1 ( 1 ) {\displaystyle A:=\varphi ^{-1}(1)} although knowing A {\displaystyle A} is enough to reconstruct ker φ {\displaystyle \ker \varphi } because if A = ∅ {\displaystyle A=\varnothing } then ker φ = H {\displaystyle \ker \varphi =H} and otherwise ker φ = A − A {\displaystyle \ker \varphi =A-A} ). In particular, the norm of φ {\displaystyle \varphi } should somehow be interpretable as the "norm of the hyperplane A {\displaystyle A} ". When φ ≠ 0 {\displaystyle \varphi \neq 0} then the Riesz representation theorem provides such an interpretation of ‖ φ ‖ {\displaystyle \|\varphi \|} in terms of the affine hyperplane [ note 3 ] A := φ − 1 ( 1 ) {\displaystyle A:=\varphi ^{-1}(1)} as follows: using the notation from the theorem's statement, from ‖ φ ‖ 2 ≠ 0 {\displaystyle \|\varphi \|^{2}\neq 0} it follows that C := φ − 1 ( ‖ φ ‖ 2 ) = ‖ φ ‖ 2 φ − 1 ( 1 ) = ‖ φ ‖ 2 A {\displaystyle C:=\varphi ^{-1}\left(\|\varphi \|^{2}\right)=\|\varphi \|^{2}\varphi ^{-1}(1)=\|\varphi \|^{2}A} and so ‖ φ ‖ = ‖ f φ ‖ = inf c ∈ C ‖ c ‖ {\displaystyle \|\varphi \|=\left\|f_{\varphi }\right\|=\inf _{c\in C}\|c\|} implies ‖ φ ‖ = inf a ∈ A ‖ φ ‖ 2 ‖ a ‖ {\displaystyle \|\varphi \|=\inf _{a\in A}\|\varphi \|^{2}\|a\|} and thus ‖ φ ‖ = 1 inf a ∈ A ‖ a ‖ . {\displaystyle \|\varphi \|={\frac {1}{\inf _{a\in A}\|a\|}}.} This can also be seen by applying the Hilbert projection theorem to A {\displaystyle A} and concluding that the global minimum point of the map A → [ 0 , ∞ ) {\displaystyle A\to [0,\infty )} defined by a ↦ ‖ a ‖ {\displaystyle a\mapsto \|a\|} is f φ ‖ φ ‖ 2 ∈ A . 
{\displaystyle {\frac {f_{\varphi }}{\|\varphi \|^{2}}}\in A.} The formulas 1 inf a ∈ A ‖ a ‖ = sup a ∈ A 1 ‖ a ‖ {\displaystyle {\frac {1}{\inf _{a\in A}\|a\|}}=\sup _{a\in A}{\frac {1}{\|a\|}}} provide the promised interpretation of the linear functional's norm ‖ φ ‖ {\displaystyle \|\varphi \|} entirely in terms of its associated affine hyperplane A = φ − 1 ( 1 ) {\displaystyle A=\varphi ^{-1}(1)} (because with this formula, knowing only the set A {\displaystyle A} is enough to describe the norm of its associated linear functional ). Defining 1 ∞ := 0 , {\displaystyle {\frac {1}{\infty }}:=0,} the infimum formula ‖ φ ‖ = 1 inf a ∈ φ − 1 ( 1 ) ‖ a ‖ {\displaystyle \|\varphi \|={\frac {1}{\inf _{a\in \varphi ^{-1}(1)}\|a\|}}} will also hold when φ = 0. {\displaystyle \varphi =0.} When the supremum is taken in R {\displaystyle \mathbb {R} } (as is typically assumed), then the supremum of the empty set is sup ∅ = − ∞ {\displaystyle \sup \varnothing =-\infty } but if the supremum is taken in the non-negative reals [ 0 , ∞ ) {\displaystyle [0,\infty )} (which is the image /range of the norm ‖ ⋅ ‖ {\displaystyle \|\,\cdot \,\|} when dim H > 0 {\displaystyle \dim H>0} ) then this supremum is instead sup ∅ = 0 , {\displaystyle \sup \varnothing =0,} in which case the supremum formula ‖ φ ‖ = sup a ∈ φ − 1 ( 1 ) 1 ‖ a ‖ {\displaystyle \|\varphi \|=\sup _{a\in \varphi ^{-1}(1)}{\frac {1}{\|a\|}}} will also hold when φ = 0 {\displaystyle \varphi =0} (although the atypical equality sup ∅ = 0 {\displaystyle \sup \varnothing =0} is usually unexpected and so risks causing confusion).
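The hyperplane interpretation of the norm can also be checked numerically in ℂⁿ. In the sketch below (an arbitrary illustrative setup, not from the article), the minimum-norm point of A = φ⁻¹(1) is obtained as the minimum-norm solution of the underdetermined linear system φ(a) = 1, which NumPy's least-squares solver returns:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
phi_vec = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Minimum-norm solution of phi(a) = 1, i.e. the point of the affine
# hyperplane A = phi^{-1}(1) that is closest to the origin.
a_min, *_ = np.linalg.lstsq(phi_vec.reshape(1, n),
                            np.ones(1, dtype=complex), rcond=None)

norm_phi = np.linalg.norm(phi_vec)
# ||phi|| = 1 / inf_{a in A} ||a||
assert np.isclose(norm_phi, 1 / np.linalg.norm(a_min))
# The minimizer is f_phi / ||phi||^2 with f_phi = conj(phi_vec)
assert np.allclose(a_min, np.conj(phi_vec) / norm_phi**2)
```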
Using the notation from the theorem above, several ways of constructing f φ {\displaystyle f_{\varphi }} from φ ∈ H ∗ {\displaystyle \varphi \in H^{*}} are now described.
If φ = 0 {\displaystyle \varphi =0} then f φ := 0 {\displaystyle f_{\varphi }:=0} ; in other words, f 0 = 0. {\displaystyle f_{0}=0.}
This special case of φ = 0 {\displaystyle \varphi =0} is henceforth assumed to be known, which is why some of the constructions given below start by assuming φ ≠ 0. {\displaystyle \varphi \neq 0.}
Orthogonal complement of kernel
If φ ≠ 0 {\displaystyle \varphi \neq 0} then for any 0 ≠ u ∈ ( ker φ ) ⊥ , {\displaystyle 0\neq u\in (\ker \varphi )^{\bot },} f φ := φ ( u ) ¯ u ‖ u ‖ 2 . {\displaystyle f_{\varphi }:={\frac {{\overline {\varphi (u)}}u}{\|u\|^{2}}}.}
If u ∈ ( ker φ ) ⊥ {\displaystyle u\in (\ker \varphi )^{\bot }} is a unit vector (meaning ‖ u ‖ = 1 {\displaystyle \|u\|=1} ) then f φ := φ ( u ) ¯ u {\displaystyle f_{\varphi }:={\overline {\varphi (u)}}u} (this is true even if φ = 0 {\displaystyle \varphi =0} because in this case f φ = φ ( u ) ¯ u = 0 ¯ u = 0 {\displaystyle f_{\varphi }={\overline {\varphi (u)}}u={\overline {0}}u=0} ).
If u {\displaystyle u} is a unit vector satisfying the above condition then the same is true of − u , {\displaystyle -u,} which is also a unit vector in ( ker φ ) ⊥ . {\displaystyle (\ker \varphi )^{\bot }.} However, φ ( − u ) ¯ ( − u ) = φ ( u ) ¯ u = f φ {\displaystyle {\overline {\varphi (-u)}}(-u)={\overline {\varphi (u)}}u=f_{\varphi }} so both these vectors result in the same f φ . {\displaystyle f_{\varphi }.}
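This construction can be illustrated in ℂⁿ. In the sketch below (dimension, functional, and seed are arbitrary choices), a unit vector u in (ker φ)⊥ is read off from the SVD of the 1 × n matrix representing φ, and the formula f_φ = conj(φ(u)) u is then verified against the known Riesz vector conj(φ_vec):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
phi_vec = rng.standard_normal(n) + 1j * rng.standard_normal(n)
phi = lambda x: phi_vec @ x

# SVD of the 1 x n matrix of phi: the first right singular vector spans
# (ker phi)^perp, the remaining ones span ker phi.
_, _, Vh = np.linalg.svd(phi_vec.reshape(1, n))
u = np.conj(Vh[0])                # unit vector in (ker phi)^perp

# f_phi = conj(phi(u)) * u
f = np.conj(phi(u)) * u

# It represents phi: phi(x) = <f | x> = conj(f) @ x
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
assert np.isclose(phi(x), np.conj(f) @ x)
assert np.isclose(np.linalg.norm(f), np.linalg.norm(phi_vec))
```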
Orthogonal projection onto kernel
If x ∈ H {\displaystyle x\in H} is such that φ ( x ) ≠ 0 {\displaystyle \varphi (x)\neq 0} and if x K {\displaystyle x_{K}} is the orthogonal projection of x {\displaystyle x} onto ker φ {\displaystyle \ker \varphi } then [ proof 1 ] f φ = ‖ φ ‖ 2 φ ( x ) ( x − x K ) . {\displaystyle f_{\varphi }={\frac {\|\varphi \|^{2}}{\varphi (x)}}\left(x-x_{K}\right).}
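A numerical sketch of this formula in ℂⁿ (all concrete choices below are arbitrary illustrations): an orthonormal basis of ker φ is taken from the SVD, the projection x_K is computed from it, and the resulting vector is compared with the known Riesz vector conj(φ_vec):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
phi_vec = rng.standard_normal(n) + 1j * rng.standard_normal(n)
phi = lambda v: phi_vec @ v

# Orthonormal basis of ker(phi) from the SVD of the 1 x n matrix of phi
_, _, Vh = np.linalg.svd(phi_vec.reshape(1, n))
N = np.conj(Vh[1:]).T                 # columns span ker(phi)

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
assert not np.isclose(phi(x), 0)      # need phi(x) != 0

x_K = N @ (np.conj(N).T @ x)          # orthogonal projection onto ker(phi)
f = np.linalg.norm(phi_vec) ** 2 / phi(x) * (x - x_K)

assert np.allclose(f, np.conj(phi_vec))   # agrees with the Riesz vector
```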
Orthonormal basis
Given an orthonormal basis { e i } i ∈ I {\displaystyle \left\{e_{i}\right\}_{i\in I}} of H {\displaystyle H} and a continuous linear functional φ ∈ H ∗ , {\displaystyle \varphi \in H^{*},} the vector f φ ∈ H {\displaystyle f_{\varphi }\in H} can be constructed uniquely by f φ = ∑ i ∈ I φ ( e i ) ¯ e i {\displaystyle f_{\varphi }=\sum _{i\in I}{\overline {\varphi \left(e_{i}\right)}}e_{i}} where all but at most countably many φ ( e i ) {\displaystyle \varphi \left(e_{i}\right)} will be equal to 0 {\displaystyle 0} and where the value of f φ {\displaystyle f_{\varphi }} does not actually depend on choice of orthonormal basis (that is, using any other orthonormal basis for H {\displaystyle H} will result in the same vector).
If y ∈ H {\displaystyle y\in H} is written as y = ∑ i ∈ I a i e i {\displaystyle y=\sum _{i\in I}a_{i}e_{i}} then φ ( y ) = ∑ i ∈ I φ ( e i ) a i = ⟨ f φ | y ⟩ {\displaystyle \varphi (y)=\sum _{i\in I}\varphi \left(e_{i}\right)a_{i}=\langle f_{\varphi }|y\rangle } and ‖ f φ ‖ 2 = φ ( f φ ) = ∑ i ∈ I φ ( e i ) φ ( e i ) ¯ = ∑ i ∈ I | φ ( e i ) | 2 = ‖ φ ‖ 2 . {\displaystyle \left\|f_{\varphi }\right\|^{2}=\varphi \left(f_{\varphi }\right)=\sum _{i\in I}\varphi \left(e_{i}\right){\overline {\varphi \left(e_{i}\right)}}=\sum _{i\in I}\left|\varphi \left(e_{i}\right)\right|^{2}=\|\varphi \|^{2}.}
If the orthonormal basis { e i } i ∈ I = { e i } i = 1 ∞ {\displaystyle \left\{e_{i}\right\}_{i\in I}=\left\{e_{i}\right\}_{i=1}^{\infty }} is a sequence then this becomes f φ = φ ( e 1 ) ¯ e 1 + φ ( e 2 ) ¯ e 2 + ⋯ {\displaystyle f_{\varphi }={\overline {\varphi \left(e_{1}\right)}}e_{1}+{\overline {\varphi \left(e_{2}\right)}}e_{2}+\cdots } and if y ∈ H {\displaystyle y\in H} is written as y = ∑ i ∈ I a i e i = a 1 e 1 + a 2 e 2 + ⋯ {\displaystyle y=\sum _{i\in I}a_{i}e_{i}=a_{1}e_{1}+a_{2}e_{2}+\cdots } then φ ( y ) = φ ( e 1 ) a 1 + φ ( e 2 ) a 2 + ⋯ = ⟨ f φ | y ⟩ . {\displaystyle \varphi (y)=\varphi \left(e_{1}\right)a_{1}+\varphi \left(e_{2}\right)a_{2}+\cdots =\langle f_{\varphi }|y\rangle .}
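The basis independence of this formula can be illustrated in ℂⁿ. The sketch below (an arbitrary setup) uses the columns of a random unitary matrix as the orthonormal basis {eᵢ} and checks that the sum Σᵢ conj(φ(eᵢ)) eᵢ still produces the vector obtained from the standard basis:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
phi_vec = rng.standard_normal(n) + 1j * rng.standard_normal(n)
phi = lambda v: phi_vec @ v

# A random orthonormal basis {e_i}: the columns of a unitary matrix
Q, _ = np.linalg.qr(rng.standard_normal((n, n))
                    + 1j * rng.standard_normal((n, n)))

# f_phi = sum_i conj(phi(e_i)) e_i
f = sum(np.conj(phi(Q[:, i])) * Q[:, i] for i in range(n))

# Basis independence: it equals the vector from the standard basis
assert np.allclose(f, np.conj(phi_vec))
```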
Consider the special case of H = C n {\displaystyle H=\mathbb {C} ^{n}} (where n > 0 {\displaystyle n>0} is an integer) with the standard inner product ⟨ z ∣ w ⟩ := z → ¯ T w → for all w , z ∈ H {\displaystyle \langle z\mid w\rangle :={\overline {\,{\vec {z}}\,\,}}^{\operatorname {T} }{\vec {w}}\qquad {\text{ for all }}\;w,z\in H} where w and z {\displaystyle w{\text{ and }}z} are represented as column matrices w → := [ w 1 ⋮ w n ] {\displaystyle {\vec {w}}:={\begin{bmatrix}w_{1}\\\vdots \\w_{n}\end{bmatrix}}} and z → := [ z 1 ⋮ z n ] {\displaystyle {\vec {z}}:={\begin{bmatrix}z_{1}\\\vdots \\z_{n}\end{bmatrix}}} with respect to the standard orthonormal basis e 1 , … , e n {\displaystyle e_{1},\ldots ,e_{n}} on H {\displaystyle H} (here, e i {\displaystyle e_{i}} is 1 {\displaystyle 1} at its i {\displaystyle i} th coordinate and 0 {\displaystyle 0} everywhere else; as usual, H ∗ {\displaystyle H^{*}} will now be associated with the dual basis ) and where z → ¯ T := [ z 1 ¯ , … , z n ¯ ] {\displaystyle {\overline {\,{\vec {z}}\,}}^{\operatorname {T} }:=\left[{\overline {z_{1}}},\ldots ,{\overline {z_{n}}}\right]} denotes the conjugate transpose of z → . {\displaystyle {\vec {z}}.} Let φ ∈ H ∗ {\displaystyle \varphi \in H^{*}} be any linear functional and let φ 1 , … , φ n ∈ C {\displaystyle \varphi _{1},\ldots ,\varphi _{n}\in \mathbb {C} } be the unique scalars such that φ ( w 1 , … , w n ) = φ 1 w 1 + ⋯ + φ n w n for all w := ( w 1 , … , w n ) ∈ H , {\displaystyle \varphi \left(w_{1},\ldots ,w_{n}\right)=\varphi _{1}w_{1}+\cdots +\varphi _{n}w_{n}\qquad {\text{ for all }}\;w:=\left(w_{1},\ldots ,w_{n}\right)\in H,} where it can be shown that φ i = φ ( e i ) {\displaystyle \varphi _{i}=\varphi \left(e_{i}\right)} for all i = 1 , … , n . {\displaystyle i=1,\ldots ,n.} Then the Riesz representation of φ {\displaystyle \varphi } is the vector f φ := φ 1 ¯ e 1 + ⋯ + φ n ¯ e n = ( φ 1 ¯ , … , φ n ¯ ) ∈ H . 
{\displaystyle f_{\varphi }~:=~{\overline {\varphi _{1}}}e_{1}+\cdots +{\overline {\varphi _{n}}}e_{n}~=~\left({\overline {\varphi _{1}}},\ldots ,{\overline {\varphi _{n}}}\right)\in H.} To see why, identify every vector w = ( w 1 , … , w n ) {\displaystyle w=\left(w_{1},\ldots ,w_{n}\right)} in H {\displaystyle H} with the column matrix w → := [ w 1 ⋮ w n ] {\displaystyle {\vec {w}}:={\begin{bmatrix}w_{1}\\\vdots \\w_{n}\end{bmatrix}}} so that f φ {\displaystyle f_{\varphi }} is identified with f φ → := [ φ 1 ¯ ⋮ φ n ¯ ] = [ φ ( e 1 ) ¯ ⋮ φ ( e n ) ¯ ] . {\displaystyle {\vec {f_{\varphi }}}:={\begin{bmatrix}{\overline {\varphi _{1}}}\\\vdots \\{\overline {\varphi _{n}}}\end{bmatrix}}={\begin{bmatrix}{\overline {\varphi \left(e_{1}\right)}}\\\vdots \\{\overline {\varphi \left(e_{n}\right)}}\end{bmatrix}}.} As usual, also identify the linear functional φ {\displaystyle \varphi } with its transformation matrix , which is the row matrix φ → := [ φ 1 , … , φ n ] {\displaystyle {\vec {\varphi }}:=\left[\varphi _{1},\ldots ,\varphi _{n}\right]} so that f φ → := φ → ¯ T {\displaystyle {\vec {f_{\varphi }}}:={\overline {\,{\vec {\varphi }}\,\,}}^{\operatorname {T} }} and the function φ {\displaystyle \varphi } is the assignment w → ↦ φ → w → , {\displaystyle {\vec {w}}\mapsto {\vec {\varphi }}\,{\vec {w}},} where the right hand side is matrix multiplication . 
Then for all w = ( w 1 , … , w n ) ∈ H , {\displaystyle w=\left(w_{1},\ldots ,w_{n}\right)\in H,} φ ( w ) = φ 1 w 1 + ⋯ + φ n w n = [ φ 1 , … , φ n ] [ w 1 ⋮ w n ] = [ φ 1 ¯ ⋮ φ n ¯ ] ¯ T w → = f φ → ¯ T w → = ⟨ f φ ∣ w ⟩ , {\displaystyle \varphi (w)=\varphi _{1}w_{1}+\cdots +\varphi _{n}w_{n}=\left[\varphi _{1},\ldots ,\varphi _{n}\right]{\begin{bmatrix}w_{1}\\\vdots \\w_{n}\end{bmatrix}}={\overline {\begin{bmatrix}{\overline {\varphi _{1}}}\\\vdots \\{\overline {\varphi _{n}}}\end{bmatrix}}}^{\operatorname {T} }{\vec {w}}={\overline {\,{\vec {f_{\varphi }}}\,\,}}^{\operatorname {T} }{\vec {w}}=\left\langle \,\,f_{\varphi }\,\mid \,w\,\right\rangle ,} which shows that f φ {\displaystyle f_{\varphi }} satisfies the defining condition of the Riesz representation of φ . {\displaystyle \varphi .} The bijective antilinear isometry Φ : H → H ∗ {\displaystyle \Phi :H\to H^{*}} defined in the corollary to the Riesz representation theorem is the assignment that sends z = ( z 1 , … , z n ) ∈ H {\displaystyle z=\left(z_{1},\ldots ,z_{n}\right)\in H} to the linear functional Φ ( z ) ∈ H ∗ {\displaystyle \Phi (z)\in H^{*}} on H {\displaystyle H} defined by w = ( w 1 , … , w n ) ↦ ⟨ z ∣ w ⟩ = z 1 ¯ w 1 + ⋯ + z n ¯ w n , {\displaystyle w=\left(w_{1},\ldots ,w_{n}\right)~\mapsto ~\langle \,z\,\mid \,w\,\rangle ={\overline {z_{1}}}w_{1}+\cdots +{\overline {z_{n}}}w_{n},} where under the identification of vectors in H {\displaystyle H} with column matrices and vector in H ∗ {\displaystyle H^{*}} with row matrices, Φ {\displaystyle \Phi } is just the assignment z → = [ z 1 ⋮ z n ] ↦ z → ¯ T = [ z 1 ¯ , … , z n ¯ ] . 
{\displaystyle {\vec {z}}={\begin{bmatrix}z_{1}\\\vdots \\z_{n}\end{bmatrix}}~\mapsto ~{\overline {\,{\vec {z}}\,}}^{\operatorname {T} }=\left[{\overline {z_{1}}},\ldots ,{\overline {z_{n}}}\right].} As described in the corollary, Φ {\displaystyle \Phi } 's inverse Φ − 1 : H ∗ → H {\displaystyle \Phi ^{-1}:H^{*}\to H} is the antilinear isometry φ ↦ f φ , {\displaystyle \varphi \mapsto f_{\varphi },} which was just shown above to be: φ ↦ f φ := ( φ ( e 1 ) ¯ , … , φ ( e n ) ¯ ) ; {\displaystyle \varphi ~\mapsto ~f_{\varphi }~:=~\left({\overline {\varphi \left(e_{1}\right)}},\ldots ,{\overline {\varphi \left(e_{n}\right)}}\right);} where in terms of matrices, Φ − 1 {\displaystyle \Phi ^{-1}} is the assignment φ → = [ φ 1 , … , φ n ] ↦ φ → ¯ T = [ φ 1 ¯ ⋮ φ n ¯ ] . {\displaystyle {\vec {\varphi }}=\left[\varphi _{1},\ldots ,\varphi _{n}\right]~\mapsto ~{\overline {\,{\vec {\varphi }}\,\,}}^{\operatorname {T} }={\begin{bmatrix}{\overline {\varphi _{1}}}\\\vdots \\{\overline {\varphi _{n}}}\end{bmatrix}}.} Thus in terms of matrices, each of Φ : H → H ∗ {\displaystyle \Phi :H\to H^{*}} and Φ − 1 : H ∗ → H {\displaystyle \Phi ^{-1}:H^{*}\to H} is just the operation of conjugate transposition v → ↦ v → ¯ T {\displaystyle {\vec {v}}\mapsto {\overline {\,{\vec {v}}\,}}^{\operatorname {T} }} (although between different spaces of matrices: if H {\displaystyle H} is identified with the space of all column (respectively, row) matrices then H ∗ {\displaystyle H^{*}} is identified with the space of all row (respectively, column) matrices).
This example used the standard inner product, which is the map ⟨ z ∣ w ⟩ := z → ¯ T w → , {\displaystyle \langle z\mid w\rangle :={\overline {\,{\vec {z}}\,\,}}^{\operatorname {T} }{\vec {w}},} but if a different inner product is used, such as ⟨ z ∣ w ⟩ M := z → ¯ T M w → {\displaystyle \langle z\mid w\rangle _{M}:={\overline {\,{\vec {z}}\,\,}}^{\operatorname {T} }\,M\,{\vec {w}}\,} where M {\displaystyle M} is any Hermitian positive-definite matrix , or if a different orthonormal basis is used then the transformation matrices, and thus also the above formulas, will be different.
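For a concrete illustration of the non-standard case, note that the condition φ(w) = ⟨f | w⟩_M = f̄ᵀ M w for all w forces f = M⁻¹ conj(φ_vec) when M is Hermitian. The sketch below (with an arbitrarily generated positive-definite M) verifies this; solving a linear system stands in for forming M⁻¹ explicitly:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
M = np.conj(A).T @ A + np.eye(n)     # Hermitian positive-definite

phi_vec = rng.standard_normal(n) + 1j * rng.standard_normal(n)
phi = lambda w: phi_vec @ w

# With <z | w>_M := conj(z)^T M w, the Riesz vector solves
# conj(f)^T M = phi_vec, i.e. f = M^{-1} conj(phi_vec)
f = np.linalg.solve(M, np.conj(phi_vec))

w = rng.standard_normal(n) + 1j * rng.standard_normal(n)
assert np.isclose(phi(w), np.conj(f) @ M @ w)
```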
Assume that H {\displaystyle H} is a complex Hilbert space with inner product ⟨ ⋅ ∣ ⋅ ⟩ . {\displaystyle \langle \,\cdot \mid \cdot \,\rangle .} When the Hilbert space H {\displaystyle H} is reinterpreted as a real Hilbert space then it will be denoted by H R , {\displaystyle H_{\mathbb {R} },} where the (real) inner-product on H R {\displaystyle H_{\mathbb {R} }} is the real part of H {\displaystyle H} 's inner product; that is: ⟨ x , y ⟩ R := re ⟨ x , y ⟩ . {\displaystyle \langle x,y\rangle _{\mathbb {R} }:=\operatorname {re} \langle x,y\rangle .}
The norm on H R {\displaystyle H_{\mathbb {R} }} induced by ⟨ ⋅ , ⋅ ⟩ R {\displaystyle \langle \,\cdot \,,\,\cdot \,\rangle _{\mathbb {R} }} is equal to the original norm on H {\displaystyle H} and the continuous dual space of H R {\displaystyle H_{\mathbb {R} }} is the set of all real -valued bounded R {\displaystyle \mathbb {R} } -linear functionals on H R {\displaystyle H_{\mathbb {R} }} (see the article about the polarization identity for additional details about this relationship).
Let ψ R := re ψ {\displaystyle \psi _{\mathbb {R} }:=\operatorname {re} \psi } and ψ i := im ψ {\displaystyle \psi _{i}:=\operatorname {im} \psi } denote the real and imaginary parts of a linear functional ψ , {\displaystyle \psi ,} so that ψ = re ψ + i im ψ = ψ R + i ψ i . {\displaystyle \psi =\operatorname {re} \psi +i\operatorname {im} \psi =\psi _{\mathbb {R} }+i\psi _{i}.} The formula expressing a linear functional in terms of its real part is ψ ( h ) = ψ R ( h ) − i ψ R ( i h ) for h ∈ H , {\displaystyle \psi (h)=\psi _{\mathbb {R} }(h)-i\psi _{\mathbb {R} }(ih)\quad {\text{ for }}h\in H,} where ψ i ( h ) = − i ψ R ( i h ) {\displaystyle \psi _{i}(h)=-i\psi _{\mathbb {R} }(ih)} for all h ∈ H . {\displaystyle h\in H.} It follows that ker ψ R = ψ − 1 ( i R ) , {\displaystyle \ker \psi _{\mathbb {R} }=\psi ^{-1}(i\mathbb {R} ),} and that ψ = 0 {\displaystyle \psi =0} if and only if ψ R = 0. {\displaystyle \psi _{\mathbb {R} }=0.} It can also be shown that ‖ ψ ‖ = ‖ ψ R ‖ = ‖ ψ i ‖ {\displaystyle \|\psi \|=\left\|\psi _{\mathbb {R} }\right\|=\left\|\psi _{i}\right\|} where ‖ ψ R ‖ := sup ‖ h ‖ ≤ 1 | ψ R ( h ) | {\displaystyle \left\|\psi _{\mathbb {R} }\right\|:=\sup _{\|h\|\leq 1}\left|\psi _{\mathbb {R} }(h)\right|} and ‖ ψ i ‖ := sup ‖ h ‖ ≤ 1 | ψ i ( h ) | {\displaystyle \left\|\psi _{i}\right\|:=\sup _{\|h\|\leq 1}\left|\psi _{i}(h)\right|} are the usual operator norms .
In particular, a linear functional ψ {\displaystyle \psi } is bounded if and only if its real part ψ R {\displaystyle \psi _{\mathbb {R} }} is bounded.
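As a quick numerical sanity check, the real-part formula can be verified in C² with NumPy. This is only an illustrative sketch: the vector `a` defining ψ and the convention that the inner product conjugates its first argument are assumptions chosen to match the article.

```python
import numpy as np

# Inner product, antilinear in the first argument (matching this article):
# <x | y> = sum(conj(x_k) * y_k)
def ip(x, y):
    return np.vdot(x, y)  # np.vdot conjugates its first argument

# A linear functional psi on C^2, here psi(h) = <a | h> for a fixed vector a.
a = np.array([1.0 + 2.0j, -0.5 + 1.0j])
def psi(h):
    return ip(a, h)

def psi_R(h):  # real part of psi
    return psi(h).real

h = np.array([0.3 - 1.1j, 2.0 + 0.7j])

# The formula psi(h) = psi_R(h) - i * psi_R(i h):
assert np.isclose(psi(h), psi_R(h) - 1j * psi_R(1j * h))

# The imaginary part satisfies psi_i(h) = -psi_R(i h):
assert np.isclose(psi(h).imag, -psi_R(1j * h))
```

The same check works for any choice of `a` and `h`, since the identity is purely algebraic.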
Representing a functional and its real part
The Riesz representation of a continuous linear function φ {\displaystyle \varphi } on a complex Hilbert space is equal to the Riesz representation of its real part re φ {\displaystyle \operatorname {re} \varphi } on its associated real Hilbert space.
Explicitly, let φ ∈ H ∗ {\displaystyle \varphi \in H^{*}} and as above, let f φ ∈ H {\displaystyle f_{\varphi }\in H} be the Riesz representation of φ {\displaystyle \varphi } obtained in ( H , ⟨ ⋅ , ⋅ ⟩ ) , {\displaystyle (H,\langle \cdot ,\cdot \rangle ),} so it is the unique vector that satisfies φ ( x ) = ⟨ f φ ∣ x ⟩ {\displaystyle \varphi (x)=\left\langle f_{\varphi }\mid x\right\rangle } for all x ∈ H . {\displaystyle x\in H.} The real part of φ {\displaystyle \varphi } is a continuous real linear functional on H R {\displaystyle H_{\mathbb {R} }} and so the Riesz representation theorem may be applied to φ R := re φ {\displaystyle \varphi _{\mathbb {R} }:=\operatorname {re} \varphi } and the associated real Hilbert space ( H R , ⟨ ⋅ , ⋅ ⟩ R ) {\displaystyle \left(H_{\mathbb {R} },\langle \cdot ,\cdot \rangle _{\mathbb {R} }\right)} to produce its Riesz representation, which will be denoted by f φ R . {\displaystyle f_{\varphi _{\mathbb {R} }}.} That is, f φ R {\displaystyle f_{\varphi _{\mathbb {R} }}} is the unique vector in H R {\displaystyle H_{\mathbb {R} }} that satisfies φ R ( x ) = ⟨ f φ R ∣ x ⟩ R {\displaystyle \varphi _{\mathbb {R} }(x)=\left\langle f_{\varphi _{\mathbb {R} }}\mid x\right\rangle _{\mathbb {R} }} for all x ∈ H . {\displaystyle x\in H.} The conclusion is f φ R = f φ . 
{\displaystyle f_{\varphi _{\mathbb {R} }}=f_{\varphi }.} This follows from the main theorem because ker φ R = φ − 1 ( i R ) {\displaystyle \ker \varphi _{\mathbb {R} }=\varphi ^{-1}(i\mathbb {R} )} and if x ∈ H {\displaystyle x\in H} then ⟨ f φ ∣ x ⟩ R = re ⟨ f φ ∣ x ⟩ = re φ ( x ) = φ R ( x ) {\displaystyle \left\langle f_{\varphi }\mid x\right\rangle _{\mathbb {R} }=\operatorname {re} \left\langle f_{\varphi }\mid x\right\rangle =\operatorname {re} \varphi (x)=\varphi _{\mathbb {R} }(x)} and consequently, if m ∈ ker φ R {\displaystyle m\in \ker \varphi _{\mathbb {R} }} then ⟨ f φ ∣ m ⟩ R = 0 , {\displaystyle \left\langle f_{\varphi }\mid m\right\rangle _{\mathbb {R} }=0,} which shows that f φ ∈ ( ker φ R ) ⊥ R . {\displaystyle f_{\varphi }\in (\ker \varphi _{\mathbb {R} })^{\perp _{\mathbb {R} }}.} Moreover, φ ( f φ ) = ‖ φ ‖ 2 {\displaystyle \varphi (f_{\varphi })=\|\varphi \|^{2}} being a real number implies that φ R ( f φ ) = re φ ( f φ ) = ‖ φ ‖ 2 . {\displaystyle \varphi _{\mathbb {R} }(f_{\varphi })=\operatorname {re} \varphi (f_{\varphi })=\|\varphi \|^{2}.} In other words, in the theorem and constructions above, if H {\displaystyle H} is replaced with its real Hilbert space counterpart H R {\displaystyle H_{\mathbb {R} }} and if φ {\displaystyle \varphi } is replaced with re φ {\displaystyle \operatorname {re} \varphi } then f φ = f re φ . {\displaystyle f_{\varphi }=f_{\operatorname {re} \varphi }.} This means that the vector f φ {\displaystyle f_{\varphi }} obtained by using ( H R , ⟨ ⋅ , ⋅ ⟩ R ) {\displaystyle \left(H_{\mathbb {R} },\langle \cdot ,\cdot \rangle _{\mathbb {R} }\right)} and the real linear functional re φ {\displaystyle \operatorname {re} \varphi } is equal to the vector obtained by using the original complex Hilbert space ( H , ⟨ ⋅ , ⋅ ⟩ ) {\displaystyle \left(H,\left\langle \cdot ,\cdot \right\rangle \right)} and the original complex linear functional φ {\displaystyle \varphi } (with identical norm values as well).
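The identity underlying this, re ⟨ f ∣ x ⟩ = ⟨ f , x ⟩ R , can be checked concretely by realifying C² to R⁴. The sketch below (with an arbitrarily chosen representative `f`, not taken from the source) verifies that the real part of the complex inner product equals the Euclidean dot product of the stacked real and imaginary parts, which is why the Riesz representative of re φ on the realified space is the same vector f φ .

```python
import numpy as np

# phi(x) = <f | x> with the inner product antilinear in the first slot,
# so the Riesz representative of phi is f itself.
f = np.array([1.0 - 1.0j, 2.0 + 0.5j])
def phi(x):
    return np.vdot(f, x)

# Realify: identify C^2 with R^4 via x -> (re x, im x); then
# re <u | v> equals the Euclidean dot product of the realified vectors.
def realify(x):
    return np.concatenate([x.real, x.imag])

x = np.array([0.2 + 0.9j, -1.3 + 0.4j])
assert np.isclose(np.vdot(f, x).real, realify(f) @ realify(x))

# Hence the Riesz representative of re(phi) on the real Hilbert space R^4
# is realify(f), i.e. the same vector f viewed through realification.
```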
Furthermore, if φ ≠ 0 {\displaystyle \varphi \neq 0} then f φ {\displaystyle f_{\varphi }} is perpendicular to ker φ R {\displaystyle \ker \varphi _{\mathbb {R} }} with respect to ⟨ ⋅ , ⋅ ⟩ R {\displaystyle \langle \cdot ,\cdot \rangle _{\mathbb {R} }} where the kernel of φ {\displaystyle \varphi } is a proper subspace of the kernel of its real part φ R . {\displaystyle \varphi _{\mathbb {R} }.} Assume now that φ ≠ 0. {\displaystyle \varphi \neq 0.} Then f φ ∉ ker φ R {\displaystyle f_{\varphi }\not \in \ker \varphi _{\mathbb {R} }} because φ R ( f φ ) = φ ( f φ ) = ‖ φ ‖ 2 ≠ 0 {\displaystyle \varphi _{\mathbb {R} }\left(f_{\varphi }\right)=\varphi \left(f_{\varphi }\right)=\|\varphi \|^{2}\neq 0} and ker φ {\displaystyle \ker \varphi } is a proper subset of ker φ R . {\displaystyle \ker \varphi _{\mathbb {R} }.} The vector subspace ker φ {\displaystyle \ker \varphi } has real codimension 1 {\displaystyle 1} in ker φ R , {\displaystyle \ker \varphi _{\mathbb {R} },} while ker φ R {\displaystyle \ker \varphi _{\mathbb {R} }} has real codimension 1 {\displaystyle 1} in H R , {\displaystyle H_{\mathbb {R} },} and ⟨ f φ , ker φ R ⟩ R = 0. {\displaystyle \left\langle f_{\varphi },\ker \varphi _{\mathbb {R} }\right\rangle _{\mathbb {R} }=0.} That is, f φ {\displaystyle f_{\varphi }} is perpendicular to ker φ R {\displaystyle \ker \varphi _{\mathbb {R} }} with respect to ⟨ ⋅ , ⋅ ⟩ R . {\displaystyle \langle \cdot ,\cdot \rangle _{\mathbb {R} }.}
Induced linear map into anti-dual
The map defined by placing y {\displaystyle y} into the linear coordinate of the inner product and letting the variable h ∈ H {\displaystyle h\in H} vary over the antilinear coordinate results in an antilinear functional : ⟨ ⋅ ∣ y ⟩ = ⟨ y , ⋅ ⟩ : H → F defined by h ↦ ⟨ h ∣ y ⟩ = ⟨ y , h ⟩ . {\displaystyle \langle \,\cdot \mid y\,\rangle =\langle \,y,\cdot \,\rangle :H\to \mathbb {F} \quad {\text{ defined by }}\quad h\mapsto \langle \,h\mid y\,\rangle =\langle \,y,h\,\rangle .}
This map is an element of H ¯ ∗ , {\displaystyle {\overline {H}}^{*},} which is the continuous anti-dual space of H . {\displaystyle H.} The canonical map from H {\displaystyle H} into its anti-dual H ¯ ∗ {\displaystyle {\overline {H}}^{*}} [ 1 ] is the linear operator In H H ¯ ∗ : H → H ¯ ∗ y ↦ ⟨ ⋅ ∣ y ⟩ = ⟨ y , ⋅ ⟩ {\displaystyle {\begin{alignedat}{4}\operatorname {In} _{H}^{{\overline {H}}^{*}}:\;&&H&&\;\to \;&{\overline {H}}^{*}\\[0.3ex]&&y&&\;\mapsto \;&\langle \,\cdot \mid y\,\rangle =\langle \,y,\cdot \,\rangle \\[0.3ex]\end{alignedat}}} which is also an injective isometry . [ 1 ] The Fundamental theorem of Hilbert spaces , which is related to the Riesz representation theorem, states that this map is surjective (and thus bijective ). Consequently, every antilinear functional on H {\displaystyle H} can be written (uniquely) in this form. [ 1 ]
If Cong : H ∗ → H ¯ ∗ {\displaystyle \operatorname {Cong} :H^{*}\to {\overline {H}}^{*}} is the canonical antilinear bijective isometry f ↦ f ¯ {\displaystyle f\mapsto {\overline {f}}} that was defined above, then the following equality holds: Cong ∘ In H H ∗ = In H H ¯ ∗ . {\displaystyle \operatorname {Cong} ~\circ ~\operatorname {In} _{H}^{H^{*}}~=~\operatorname {In} _{H}^{{\overline {H}}^{*}}.}
Let ( H , ⟨ ⋅ , ⋅ ⟩ H ) {\displaystyle \left(H,\langle \cdot ,\cdot \rangle _{H}\right)} be a Hilbert space and as before, let ⟨ y | x ⟩ H := ⟨ x , y ⟩ H . {\displaystyle \langle y\,|\,x\rangle _{H}:=\langle x,y\rangle _{H}.} Let Φ : H → H ∗ g ↦ ⟨ g ∣ ⋅ ⟩ H = ⟨ ⋅ , g ⟩ H {\displaystyle {\begin{alignedat}{4}\Phi :\;&&H&&\;\to \;&H^{*}\\[0.3ex]&&g&&\;\mapsto \;&\left\langle \,g\mid \cdot \,\right\rangle _{H}=\left\langle \,\cdot ,g\,\right\rangle _{H}\\\end{alignedat}}} which is a bijective antilinear isometry that satisfies ( Φ h ) g = ⟨ h ∣ g ⟩ H = ⟨ g , h ⟩ H for all g , h ∈ H . {\displaystyle (\Phi h)g=\langle h\mid g\rangle _{H}=\langle g,h\rangle _{H}\quad {\text{ for all }}g,h\in H.}
Bras
Given a vector h ∈ H , {\displaystyle h\in H,} let ⟨ h | {\displaystyle \langle h\,|} denote the continuous linear functional Φ h {\displaystyle \Phi h} ; that is, ⟨ h | := Φ h {\displaystyle \langle h\,|~:=~\Phi h} so that this functional ⟨ h | {\displaystyle \langle h\,|} is defined by g ↦ ⟨ h ∣ g ⟩ H . {\displaystyle g\mapsto \left\langle \,h\mid g\,\right\rangle _{H}.} This map was denoted by ⟨ h ∣ ⋅ ⟩ {\displaystyle \left\langle h\mid \cdot \,\right\rangle } earlier in this article.
The assignment h ↦ ⟨ h | {\displaystyle h\mapsto \langle h|} is just the isometric antilinear isomorphism Φ : H → H ∗ , {\displaystyle \Phi ~:~H\to H^{*},} which is why ⟨ c g + h | = c ¯ ⟨ g ∣ + ⟨ h | {\displaystyle ~\langle cg+h\,|~=~{\overline {c}}\langle g\mid ~+~\langle h\,|~} holds for all g , h ∈ H {\displaystyle g,h\in H} and all scalars c . {\displaystyle c.} The result of plugging some given g ∈ H {\displaystyle g\in H} into the functional ⟨ h | {\displaystyle \langle h\,|} is the scalar ⟨ h | g ⟩ H = ⟨ g , h ⟩ H , {\displaystyle \langle h\,|\,g\rangle _{H}=\langle g,h\rangle _{H},} which may be denoted by ⟨ h ∣ g ⟩ . {\displaystyle \langle h\mid g\rangle .} [ note 6 ]
Bra of a linear functional
Given a continuous linear functional ψ ∈ H ∗ , {\displaystyle \psi \in H^{*},} let ⟨ ψ ∣ {\displaystyle \langle \psi \mid } denote the vector Φ − 1 ψ ∈ H {\displaystyle \Phi ^{-1}\psi \in H} ; that is, ⟨ ψ ∣ := Φ − 1 ψ . {\displaystyle \langle \psi \mid ~:=~\Phi ^{-1}\psi .}
The assignment ψ ↦ ⟨ ψ ∣ {\displaystyle \psi \mapsto \langle \psi \mid } is just the isometric antilinear isomorphism Φ − 1 : H ∗ → H , {\displaystyle \Phi ^{-1}~:~H^{*}\to H,} which is why ⟨ c ψ + ϕ ∣ = c ¯ ⟨ ψ ∣ + ⟨ ϕ ∣ {\displaystyle ~\langle c\psi +\phi \mid ~=~{\overline {c}}\langle \psi \mid ~+~\langle \phi \mid ~} holds for all ϕ , ψ ∈ H ∗ {\displaystyle \phi ,\psi \in H^{*}} and all scalars c . {\displaystyle c.}
The defining condition of the vector ⟨ ψ | ∈ H {\displaystyle \langle \psi |\in H} is the technically correct but unsightly equality ⟨ ⟨ ψ ∣ ∣ g ⟩ H = ψ g for all g ∈ H , {\displaystyle \left\langle \,\langle \psi \mid \,\mid g\right\rangle _{H}~=~\psi g\quad {\text{ for all }}g\in H,} which is why the notation ⟨ ψ ∣ g ⟩ {\displaystyle \left\langle \psi \mid g\right\rangle } is used in place of ⟨ ⟨ ψ ∣ ∣ g ⟩ H = ⟨ g , ⟨ ψ ∣ ⟩ H . {\displaystyle \left\langle \,\langle \psi \mid \,\mid g\right\rangle _{H}=\left\langle g,\,\langle \psi \mid \right\rangle _{H}.} With this notation, the defining condition becomes ⟨ ψ ∣ g ⟩ = ψ g for all g ∈ H . {\displaystyle \left\langle \psi \mid g\right\rangle ~=~\psi g\quad {\text{ for all }}g\in H.}
Kets
For any given vector g ∈ H , {\displaystyle g\in H,} the notation | g ⟩ {\displaystyle |\,g\rangle } is used to denote g {\displaystyle g} ; that is, ∣ g ⟩ := g . {\displaystyle \mid g\rangle :=g.}
The assignment g ↦ | g ⟩ {\displaystyle g\mapsto |\,g\rangle } is just the identity map Id H : H → H , {\displaystyle \operatorname {Id} _{H}:H\to H,} which is why ∣ c g + h ⟩ = c ∣ g ⟩ + ∣ h ⟩ {\displaystyle ~\mid cg+h\rangle ~=~c\mid g\rangle ~+~\mid h\rangle ~} holds for all g , h ∈ H {\displaystyle g,h\in H} and all scalars c . {\displaystyle c.}
The notation ⟨ h ∣ g ⟩ {\displaystyle \langle h\mid g\rangle } and ⟨ ψ ∣ g ⟩ {\displaystyle \langle \psi \mid g\rangle } is used in place of ⟨ h ∣ ∣ g ⟩ ⟩ H = ⟨ ∣ g ⟩ , h ⟩ H {\displaystyle \left\langle h\mid \,\mid g\rangle \,\right\rangle _{H}~=~\left\langle \mid g\rangle ,h\right\rangle _{H}} and ⟨ ψ ∣ ∣ g ⟩ ⟩ H = ⟨ g , ⟨ ψ ∣ ⟩ H , {\displaystyle \left\langle \psi \mid \,\mid g\rangle \,\right\rangle _{H}~=~\left\langle g,\,\langle \psi \mid \right\rangle _{H},} respectively. As expected, ⟨ ψ ∣ g ⟩ = ψ g {\displaystyle ~\langle \psi \mid g\rangle =\psi g~} and ⟨ h ∣ g ⟩ {\displaystyle ~\langle h\mid g\rangle ~} really is just the scalar ⟨ h ∣ g ⟩ H = ⟨ g , h ⟩ H . {\displaystyle ~\langle h\mid g\rangle _{H}~=~\langle g,h\rangle _{H}.}
Let A : H → Z {\displaystyle A:H\to Z} be a continuous linear operator between Hilbert spaces ( H , ⟨ ⋅ , ⋅ ⟩ H ) {\displaystyle \left(H,\langle \cdot ,\cdot \rangle _{H}\right)} and ( Z , ⟨ ⋅ , ⋅ ⟩ Z ) . {\displaystyle \left(Z,\langle \cdot ,\cdot \rangle _{Z}\right).} As before, let ⟨ y ∣ x ⟩ H := ⟨ x , y ⟩ H {\displaystyle \langle y\mid x\rangle _{H}:=\langle x,y\rangle _{H}} and ⟨ y ∣ x ⟩ Z := ⟨ x , y ⟩ Z . {\displaystyle \langle y\mid x\rangle _{Z}:=\langle x,y\rangle _{Z}.}
Denote by Φ H : H → H ∗ g ↦ ⟨ g ∣ ⋅ ⟩ H and Φ Z : Z → Z ∗ y ↦ ⟨ y ∣ ⋅ ⟩ Z {\displaystyle {\begin{alignedat}{4}\Phi _{H}:\;&&H&&\;\to \;&H^{*}\\[0.3ex]&&g&&\;\mapsto \;&\langle \,g\mid \cdot \,\rangle _{H}\\\end{alignedat}}\quad {\text{ and }}\quad {\begin{alignedat}{4}\Phi _{Z}:\;&&Z&&\;\to \;&Z^{*}\\[0.3ex]&&y&&\;\mapsto \;&\langle \,y\mid \cdot \,\rangle _{Z}\\\end{alignedat}}} the usual bijective antilinear isometries that satisfy: ( Φ H g ) h = ⟨ g ∣ h ⟩ H for all g , h ∈ H and ( Φ Z y ) z = ⟨ y ∣ z ⟩ Z for all y , z ∈ Z . {\displaystyle \left(\Phi _{H}g\right)h=\langle g\mid h\rangle _{H}\quad {\text{ for all }}g,h\in H\qquad {\text{ and }}\qquad \left(\Phi _{Z}y\right)z=\langle y\mid z\rangle _{Z}\quad {\text{ for all }}y,z\in Z.}
For every z ∈ Z , {\displaystyle z\in Z,} the scalar-valued map ⟨ z ∣ A ( ⋅ ) ⟩ Z {\displaystyle \langle z\mid A(\cdot )\rangle _{Z}} [ note 7 ] on H {\displaystyle H} defined by h ↦ ⟨ z ∣ A h ⟩ Z = ⟨ A h , z ⟩ Z {\displaystyle h\mapsto \langle z\mid Ah\rangle _{Z}=\langle Ah,z\rangle _{Z}}
is a continuous linear functional on H {\displaystyle H} and so by the Riesz representation theorem, there exists a unique vector in H , {\displaystyle H,} denoted by A ∗ z , {\displaystyle A^{*}z,} such that ⟨ z ∣ A ( ⋅ ) ⟩ Z = ⟨ A ∗ z ∣ ⋅ ⟩ H , {\displaystyle \langle z\mid A(\cdot )\rangle _{Z}=\left\langle A^{*}z\mid \cdot \,\right\rangle _{H},} or equivalently, such that ⟨ z ∣ A h ⟩ Z = ⟨ A ∗ z ∣ h ⟩ H for all h ∈ H . {\displaystyle \langle z\mid Ah\rangle _{Z}=\left\langle A^{*}z\mid h\right\rangle _{H}\quad {\text{ for all }}h\in H.}
The assignment z ↦ A ∗ z {\displaystyle z\mapsto A^{*}z} thus induces a function A ∗ : Z → H {\displaystyle A^{*}:Z\to H} called the adjoint of A : H → Z {\displaystyle A:H\to Z} whose defining condition is ⟨ z ∣ A h ⟩ Z = ⟨ A ∗ z ∣ h ⟩ H for all h ∈ H and all z ∈ Z . {\displaystyle \langle z\mid Ah\rangle _{Z}=\left\langle A^{*}z\mid h\right\rangle _{H}\quad {\text{ for all }}h\in H{\text{ and all }}z\in Z.} The adjoint A ∗ : Z → H {\displaystyle A^{*}:Z\to H} is necessarily a continuous (equivalently, a bounded ) linear operator .
If H {\displaystyle H} is finite dimensional with the standard inner product and if M {\displaystyle M} is the transformation matrix of A {\displaystyle A} with respect to the standard orthonormal basis then M {\displaystyle M} 's conjugate transpose M T ¯ {\displaystyle {\overline {M^{\operatorname {T} }}}} is the transformation matrix of the adjoint A ∗ . {\displaystyle A^{*}.}
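In finite dimensions this defining condition can be checked directly. The following NumPy sketch (with a randomly generated A, chosen purely for illustration) verifies ⟨ z ∣ A h ⟩ Z = ⟨ A ∗ z ∣ h ⟩ H with A ∗ taken as the conjugate transpose:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))  # A : C^2 -> C^3
A_star = A.conj().T                                                 # adjoint, C^3 -> C^2

h = rng.standard_normal(2) + 1j * rng.standard_normal(2)
z = rng.standard_normal(3) + 1j * rng.standard_normal(3)

# Defining condition <z | A h>_Z = <A* z | h>_H, where the inner product
# conjugates its first argument (matching the convention of this article):
assert np.isclose(np.vdot(z, A @ h), np.vdot(A_star @ z, h))
```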
It is also possible to define the transpose or algebraic adjoint of A : H → Z , {\displaystyle A:H\to Z,} which is the map t A : Z ∗ → H ∗ {\displaystyle {}^{t}A:Z^{*}\to H^{*}} defined by sending a continuous linear functional ψ ∈ Z ∗ {\displaystyle \psi \in Z^{*}} to t A ( ψ ) := ψ ∘ A , {\displaystyle {}^{t}A(\psi ):=\psi \circ A,} where the composition ψ ∘ A {\displaystyle \psi \circ A} is always a continuous linear functional on H {\displaystyle H} and it satisfies ‖ A ‖ = ‖ t A ‖ {\displaystyle \|A\|=\left\|{}^{t}A\right\|} (this is true more generally, when H {\displaystyle H} and Z {\displaystyle Z} are merely normed spaces ). [ 5 ] So for example, if z ∈ Z {\displaystyle z\in Z} then t A {\displaystyle {}^{t}A} sends the continuous linear functional ⟨ z ∣ ⋅ ⟩ Z ∈ Z ∗ {\displaystyle \langle z\mid \cdot \rangle _{Z}\in Z^{*}} (defined on Z {\displaystyle Z} by g ↦ ⟨ z ∣ g ⟩ Z {\displaystyle g\mapsto \langle z\mid g\rangle _{Z}} ) to the continuous linear functional ⟨ z ∣ A ( ⋅ ) ⟩ Z ∈ H ∗ {\displaystyle \langle z\mid A(\cdot )\rangle _{Z}\in H^{*}} (defined on H {\displaystyle H} by h ↦ ⟨ z ∣ A ( h ) ⟩ Z {\displaystyle h\mapsto \langle z\mid A(h)\rangle _{Z}} ); [ note 7 ] using bra-ket notation, this can be written as t A ⟨ z ∣ = ⟨ z ∣ A {\displaystyle {}^{t}A\langle z\mid ~=~\langle z\mid A} where the juxtaposition of ⟨ z ∣ {\displaystyle \langle z\mid } with A {\displaystyle A} on the right hand side denotes function composition: H → A Z → ⟨ z ∣ C . {\displaystyle H\xrightarrow {A} Z\xrightarrow {\langle z\mid } \mathbb {C} .}
The adjoint A ∗ : Z → H {\displaystyle A^{*}:Z\to H} is actually just the transpose t A : Z ∗ → H ∗ {\displaystyle {}^{t}A:Z^{*}\to H^{*}} [ 2 ] when the Riesz representation theorem is used to identify Z {\displaystyle Z} with Z ∗ {\displaystyle Z^{*}} and H {\displaystyle H} with H ∗ . {\displaystyle H^{*}.}
Explicitly, the relationship between the adjoint and transpose is: t A ∘ Φ Z = Φ H ∘ A ∗ {\displaystyle {}^{t}A~\circ ~\Phi _{Z}~=~\Phi _{H}~\circ ~A^{*}} ( Adjoint-transpose )
which can be rewritten as: A ∗ = Φ H − 1 ∘ t A ∘ Φ Z and t A = Φ H ∘ A ∗ ∘ Φ Z − 1 . {\displaystyle A^{*}~=~\Phi _{H}^{-1}~\circ ~{}^{t}A~\circ ~\Phi _{Z}\quad {\text{ and }}\quad {}^{t}A~=~\Phi _{H}~\circ ~A^{*}~\circ ~\Phi _{Z}^{-1}.}
To show that t A ∘ Φ Z = Φ H ∘ A ∗ , {\displaystyle {}^{t}A~\circ ~\Phi _{Z}~=~\Phi _{H}~\circ ~A^{*},} fix z ∈ Z . {\displaystyle z\in Z.} The definition of t A {\displaystyle {}^{t}A} implies ( t A ∘ Φ Z ) z = t A ( Φ Z z ) = ( Φ Z z ) ∘ A {\displaystyle \left({}^{t}A\circ \Phi _{Z}\right)z={}^{t}A\left(\Phi _{Z}z\right)=\left(\Phi _{Z}z\right)\circ A} so it remains to show that ( Φ Z z ) ∘ A = Φ H ( A ∗ z ) . {\displaystyle \left(\Phi _{Z}z\right)\circ A=\Phi _{H}\left(A^{*}z\right).} If h ∈ H {\displaystyle h\in H} then ( ( Φ Z z ) ∘ A ) h = ( Φ Z z ) ( A h ) = ⟨ z ∣ A h ⟩ Z = ⟨ A ∗ z ∣ h ⟩ H = ( Φ H ( A ∗ z ) ) h , {\displaystyle \left(\left(\Phi _{Z}z\right)\circ A\right)h=\left(\Phi _{Z}z\right)(Ah)=\langle z\mid Ah\rangle _{Z}=\langle A^{*}z\mid h\rangle _{H}=\left(\Phi _{H}(A^{*}z)\right)h,} as desired. ◼ {\displaystyle \blacksquare }
Alternatively, the value of the left and right hand sides of ( Adjoint-transpose ) at any given z ∈ Z {\displaystyle z\in Z} can be rewritten in terms of the inner products as: ( t A ∘ Φ Z ) z = ⟨ z ∣ A ( ⋅ ) ⟩ Z and ( Φ H ∘ A ∗ ) z = ⟨ A ∗ z ∣ ⋅ ⟩ H {\displaystyle \left({}^{t}A~\circ ~\Phi _{Z}\right)z=\langle z\mid A(\cdot )\rangle _{Z}\quad {\text{ and }}\quad \left(\Phi _{H}~\circ ~A^{*}\right)z=\langle A^{*}z\mid \cdot \,\rangle _{H}} so that t A ∘ Φ Z = Φ H ∘ A ∗ {\displaystyle {}^{t}A~\circ ~\Phi _{Z}~=~\Phi _{H}~\circ ~A^{*}} holds if and only if ⟨ z ∣ A ( ⋅ ) ⟩ Z = ⟨ A ∗ z ∣ ⋅ ⟩ H {\displaystyle \langle z\mid A(\cdot )\rangle _{Z}=\langle A^{*}z\mid \cdot \,\rangle _{H}} holds; but the equality on the right holds by definition of A ∗ z . {\displaystyle A^{*}z.} The defining condition of A ∗ z {\displaystyle A^{*}z} can also be written ⟨ z ∣ A = ⟨ A ∗ z ∣ {\displaystyle \langle z\mid A~=~\langle A^{*}z\mid } if bra-ket notation is used.
Assume Z = H {\displaystyle Z=H} and let Φ := Φ H = Φ Z . {\displaystyle \Phi :=\Phi _{H}=\Phi _{Z}.} Let A : H → H {\displaystyle A:H\to H} be a continuous (that is, bounded) linear operator.
Whether or not A : H → H {\displaystyle A:H\to H} is self-adjoint , normal , or unitary depends entirely on whether or not A {\displaystyle A} satisfies certain defining conditions related to its adjoint, which was shown by ( Adjoint-transpose ) to essentially be just the transpose t A : H ∗ → H ∗ . {\displaystyle {}^{t}A:H^{*}\to H^{*}.} Because the transpose of A {\displaystyle A} is a map between continuous linear functionals, these defining conditions can consequently be re-expressed entirely in terms of linear functionals, as the remainder of this subsection will now describe in detail.
The linear functionals that are involved are the simplest possible continuous linear functionals on H {\displaystyle H} that can be defined entirely in terms of A , {\displaystyle A,} the inner product ⟨ ⋅ ∣ ⋅ ⟩ {\displaystyle \langle \,\cdot \mid \cdot \,\rangle } on H , {\displaystyle H,} and some given vector h ∈ H . {\displaystyle h\in H.} Specifically, these are ⟨ A h ∣ ⋅ ⟩ {\displaystyle \left\langle Ah\mid \cdot \,\right\rangle } and ⟨ h ∣ A ( ⋅ ) ⟩ {\displaystyle \langle h\mid A(\cdot )\rangle } [ note 7 ] where ⟨ A h ∣ ⋅ ⟩ = Φ ( A h ) = ( Φ ∘ A ) h and ⟨ h ∣ A ( ⋅ ) ⟩ = ( t A ∘ Φ ) h . {\displaystyle \left\langle Ah\mid \cdot \,\right\rangle =\Phi (Ah)=(\Phi \circ A)h\quad {\text{ and }}\quad \langle h\mid A(\cdot )\rangle =\left({}^{t}A\circ \Phi \right)h.}
Self-adjoint operators
A continuous linear operator A : H → H {\displaystyle A:H\to H} is called self-adjoint if it is equal to its own adjoint; that is, if A = A ∗ . {\displaystyle A=A^{*}.} Using ( Adjoint-transpose ), this happens if and only if: Φ ∘ A = t A ∘ Φ {\displaystyle \Phi \circ A={}^{t}A\circ \Phi } where this equality can be rewritten in the following two equivalent forms: A = Φ − 1 ∘ t A ∘ Φ or t A = Φ ∘ A ∘ Φ − 1 . {\displaystyle A=\Phi ^{-1}\circ {}^{t}A\circ \Phi \quad {\text{ or }}\quad {}^{t}A=\Phi \circ A\circ \Phi ^{-1}.}
Unraveling notation and definitions produces the following characterization of self-adjoint operators in terms of the aforementioned continuous linear functionals: A {\displaystyle A} is self-adjoint if and only if for all z ∈ H , {\displaystyle z\in H,} the linear functional ⟨ z ∣ A ( ⋅ ) ⟩ {\displaystyle \langle z\mid A(\cdot )\rangle } [ note 7 ] is equal to the linear functional ⟨ A z ∣ ⋅ ⟩ {\displaystyle \langle Az\mid \cdot \,\rangle } ; that is, if and only if ⟨ z ∣ A ( ⋅ ) ⟩ = ⟨ A z ∣ ⋅ ⟩ for all z ∈ H {\displaystyle \langle z\mid A(\cdot )\rangle =\langle Az\mid \cdot \,\rangle \quad {\text{ for all }}z\in H} ( Self-adjointness functionals )
where if bra-ket notation is used, this is ⟨ z ∣ A = ⟨ A z ∣ for all z ∈ H . {\displaystyle \langle z\mid A~=~\langle Az\mid \quad {\text{ for all }}z\in H.}
Normal operators
A continuous linear operator A : H → H {\displaystyle A:H\to H} is called normal if A A ∗ = A ∗ A , {\displaystyle AA^{*}=A^{*}A,} which happens if and only if for all z , h ∈ H , {\displaystyle z,h\in H,} ⟨ A A ∗ z ∣ h ⟩ = ⟨ A ∗ A z ∣ h ⟩ . {\displaystyle \left\langle AA^{*}z\mid h\right\rangle =\left\langle A^{*}Az\mid h\right\rangle .}
Using ( Adjoint-transpose ) and unraveling notation and definitions produces [ proof 2 ] the following characterization of normal operators in terms of inner products of continuous linear functionals: A {\displaystyle A} is a normal operator if and only if ⟨ ⟨ A z ∣ ⋅ ⟩ ∣ ⟨ A h ∣ ⋅ ⟩ ⟩ H ∗ = ⟨ ⟨ z ∣ A ( ⋅ ) ⟩ ∣ ⟨ h ∣ A ( ⋅ ) ⟩ ⟩ H ∗ for all z , h ∈ H ( Normality functionals )
where the left hand side is also equal to ⟨ A h ∣ A z ⟩ ¯ H = ⟨ A z ∣ A h ⟩ H . {\displaystyle {\overline {\langle Ah\mid Az\rangle }}_{H}=\langle Az\mid Ah\rangle _{H}.} The left hand side of this characterization involves only linear functionals of the form ⟨ A h ∣ ⋅ ⟩ {\displaystyle \langle Ah\mid \cdot \,\rangle } while the right hand side involves only linear functionals of the form ⟨ h ∣ A ( ⋅ ) ⟩ {\displaystyle \langle h\mid A(\cdot )\rangle } (defined as above [ note 7 ] ).
In plain English, characterization ( Normality functionals ) says that an operator is normal exactly when the inner product of any two linear functionals of the first form equals the inner product of the corresponding two linear functionals of the second form (using the same vectors z , h ∈ H {\displaystyle z,h\in H} for both forms).
In other words, if it happens to be the case (and when A {\displaystyle A} is injective or self-adjoint, it is) that the assignment of linear functionals ⟨ A h ∣ ⋅ ⟩ ↦ ⟨ h | A ( ⋅ ) ⟩ {\displaystyle \langle Ah\mid \cdot \,\rangle ~\mapsto ~\langle h|A(\cdot )\rangle } is well-defined (or alternatively, if ⟨ h | A ( ⋅ ) ⟩ ↦ ⟨ A h ∣ ⋅ ⟩ {\displaystyle \langle h|A(\cdot )\rangle ~\mapsto ~\langle Ah\mid \cdot \,\rangle } is well-defined) where h {\displaystyle h} ranges over H , {\displaystyle H,} then A {\displaystyle A} is a normal operator if and only if this assignment preserves the inner product on H ∗ . {\displaystyle H^{*}.}
The fact that every self-adjoint bounded linear operator is normal follows readily by direct substitution of A ∗ = A {\displaystyle A^{*}=A} into either side of A ∗ A = A A ∗ . {\displaystyle A^{*}A=AA^{*}.} This same fact also follows immediately from the direct substitution of the equalities ( Self-adjointness functionals ) into either side of ( Normality functionals ).
Alternatively, for a complex Hilbert space, the continuous linear operator A {\displaystyle A} is a normal operator if and only if ‖ A z ‖ = ‖ A ∗ z ‖ {\displaystyle \|Az\|=\left\|A^{*}z\right\|} for every z ∈ H , {\displaystyle z\in H,} [ 2 ] which happens if and only if ‖ A z ‖ H = ‖ ⟨ z | A ( ⋅ ) ⟩ ‖ H ∗ for every z ∈ H . {\displaystyle \|Az\|_{H}=\|\langle z\,|\,A(\cdot )\rangle \|_{H^{*}}\quad {\text{ for every }}z\in H.}
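This norm characterization is easy to probe numerically. The sketch below (an illustration, not part of the source) builds a normal matrix as U D U ∗ and a non-normal nilpotent one, and checks that ‖ A z ‖ = ‖ A ∗ z ‖ holds for the first but fails for the second:

```python
import numpy as np

rng = np.random.default_rng(1)

# A normal matrix: unitary * complex diagonal * unitary-adjoint.
U, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))
D = np.diag([1.0 + 2.0j, -0.5j, 3.0])
N = U @ D @ U.conj().T
assert np.allclose(N @ N.conj().T, N.conj().T @ N)  # N N* = N* N

# A non-normal matrix: a nonzero nilpotent (strictly upper-triangular) block.
M = np.array([[0.0, 1.0], [0.0, 0.0]])

z3 = rng.standard_normal(3) + 1j * rng.standard_normal(3)
z2 = np.array([1.0, 0.0])

# ||N z|| = ||N* z|| holds for the normal operator ...
assert np.isclose(np.linalg.norm(N @ z3), np.linalg.norm(N.conj().T @ z3))
# ... and fails for the non-normal one (||M z2|| = 0 but ||M* z2|| = 1):
assert not np.isclose(np.linalg.norm(M @ z2), np.linalg.norm(M.conj().T @ z2))
```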
Unitary operators
An invertible bounded linear operator A : H → H {\displaystyle A:H\to H} is said to be unitary if its inverse is its adjoint: A − 1 = A ∗ . {\displaystyle A^{-1}=A^{*}.} By using ( Adjoint-transpose ), this is seen to be equivalent to Φ ∘ A − 1 = t A ∘ Φ . {\displaystyle \Phi \circ A^{-1}={}^{t}A\circ \Phi .} Unraveling notation and definitions, it follows that A {\displaystyle A} is unitary if and only if ⟨ A − 1 z ∣ ⋅ ⟩ = ⟨ z ∣ A ( ⋅ ) ⟩ for all z ∈ H . {\displaystyle \langle A^{-1}z\mid \cdot \,\rangle =\langle z\mid A(\cdot )\rangle \quad {\text{ for all }}z\in H.}
The fact that a bounded invertible linear operator A : H → H {\displaystyle A:H\to H} is unitary if and only if A ∗ A = Id H {\displaystyle A^{*}A=\operatorname {Id} _{H}} (or equivalently, t A ∘ Φ ∘ A = Φ {\displaystyle {}^{t}A\circ \Phi \circ A=\Phi } ) produces another (well-known) characterization: an invertible bounded linear map A {\displaystyle A} is unitary if and only if ⟨ A z ∣ A ( ⋅ ) ⟩ = ⟨ z ∣ ⋅ ⟩ for all z ∈ H . {\displaystyle \langle Az\mid A(\cdot )\,\rangle =\langle z\mid \cdot \,\rangle \quad {\text{ for all }}z\in H.}
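A small NumPy sketch can illustrate the equivalence between A ∗ A = Id and preservation of the inner product; the unitary here is simply the Q factor of a random complex QR decomposition, an assumption made for the example:

```python
import numpy as np

rng = np.random.default_rng(2)

# Build a unitary A by QR-factoring a random complex matrix.
A, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))

z = rng.standard_normal(3) + 1j * rng.standard_normal(3)
w = rng.standard_normal(3) + 1j * rng.standard_normal(3)

# A* A = Id, i.e. the adjoint is the inverse ...
assert np.allclose(A.conj().T @ A, np.eye(3))
# ... equivalently, <A z | A w> = <z | w> for all z, w:
assert np.isclose(np.vdot(A @ z, A @ w), np.vdot(z, w))
```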
Because A : H → H {\displaystyle A:H\to H} is invertible (and so in particular a bijection), this is also true of the transpose t A : H ∗ → H ∗ . {\displaystyle {}^{t}A:H^{*}\to H^{*}.} This fact also allows the vector z ∈ H {\displaystyle z\in H} in the above characterizations to be replaced with A z {\displaystyle Az} or A − 1 z , {\displaystyle A^{-1}z,} thereby producing many more equalities. Similarly, ⋅ {\displaystyle \,\cdot \,} can be replaced with A ( ⋅ ) {\displaystyle A(\cdot )} or A − 1 ( ⋅ ) . {\displaystyle A^{-1}(\cdot ).}
Proofs | https://en.wikipedia.org/wiki/Riesz_representation_theorem |
In mathematics, the Riesz–Markov–Kakutani representation theorem relates linear functionals on spaces of continuous functions on a locally compact space to measures in measure theory. The theorem is named for Frigyes Riesz ( 1909 ) who introduced it for continuous functions on the unit interval , Andrey Markov ( 1938 ) who extended the result to some non-compact spaces, and Shizuo Kakutani ( 1941 ) who extended the result to compact Hausdorff spaces .
There are many closely related variations of the theorem, as the linear functionals can be complex, real, or positive , the space they are defined on may be the unit interval or a compact space or a locally compact space , the continuous functions may be vanishing at infinity or have compact support , and the measures can be Baire measures or regular Borel measures or Radon measures or signed measures or complex measures .
The statement of the theorem for positive linear functionals on C c ( X ) , the space of compactly supported complex-valued continuous functions , is as follows:
Theorem Let X be a locally compact Hausdorff space and ψ {\displaystyle \psi } a positive linear functional on C c ( X ) . Then there exists a unique positive Borel measure μ {\displaystyle \mu } on X such that [ 1 ] ψ ( f ) = ∫ X f d μ for all f ∈ C c ( X ) , {\displaystyle \psi (f)=\int _{X}f\,d\mu \quad {\text{ for all }}f\in C_{c}(X),}
which has the following additional properties for some Σ {\displaystyle \Sigma } containing the Borel σ-algebra on X :
As such, if all open sets in X are σ-compact then μ {\displaystyle \mu } is a Radon measure . [ 2 ]
One approach to measure theory is to start with a Radon measure , defined as a positive linear functional on C c ( X ) . This is the way adopted by Bourbaki ; it does of course assume that X starts life as a topological space , rather than simply as a set. For locally compact spaces an integration theory is then recovered.
Without the condition of regularity the Borel measure need not be unique. For example, let X be the set of ordinals at most equal to the first uncountable ordinal Ω , with the topology generated by " open intervals ". The linear functional taking a continuous function to its value at Ω corresponds to the regular Borel measure with a point mass at Ω . However it also corresponds to the (non-regular) Borel measure that assigns measure 1 to any Borel set B ⊆ [ 0 , Ω ] {\displaystyle B\subseteq [0,\Omega ]} if there is a closed and unbounded set C ⊆ [ 0 , Ω [ {\displaystyle C\subseteq [0,\Omega [} with C ⊆ B {\displaystyle C\subseteq B} , and assigns measure 0 to other Borel sets. (In particular the singleton { Ω } {\displaystyle \{\Omega \}} gets measure 0 , contrary to the point mass measure.)
The following theorem, also referred to as the Riesz–Markov theorem , gives a concrete realisation of the topological dual space of C 0 ( X ) , the set of continuous functions on X which vanish at infinity .
Theorem Let X be a locally compact Hausdorff space. For any continuous linear functional ψ {\displaystyle \psi } on C 0 ( X ) , there is a unique complex-valued regular Borel measure μ {\displaystyle \mu } on X such that ψ ( f ) = ∫ X f d μ for all f ∈ C 0 ( X ) . {\displaystyle \psi (f)=\int _{X}f\,d\mu \quad {\text{ for all }}f\in C_{0}(X).}
A complex-valued Borel measure μ {\displaystyle \mu } is called regular if the positive measure | μ | {\displaystyle |\mu |} satisfies the regularity conditions defined above. The norm of ψ {\displaystyle \psi } as a linear functional is the total variation of μ {\displaystyle \mu } , that is ‖ ψ ‖ = | μ | ( X ) . {\displaystyle \|\psi \|=|\mu |(X).}
Finally, ψ {\displaystyle \psi } is positive if and only if the measure μ {\displaystyle \mu } is positive.
One can deduce this statement about linear functionals from the statement about positive linear functionals by first showing that a bounded linear functional can be written as a finite linear combination of positive ones.
In its original form by Frigyes Riesz ( 1909 ) the theorem states that every continuous linear functional A over the space C ([0, 1]) of continuous functions f in the interval [0, 1] can be represented as A [ f ] = ∫ 0 1 f ( x ) d α ( x ) , {\displaystyle A[f]=\int _{0}^{1}f(x)\,d\alpha (x),}
where α ( x ) is a function of bounded variation on the interval [0, 1] , and the integral is a Riemann–Stieltjes integral . Since there is a one-to-one correspondence between Borel regular measures in the interval and functions of bounded variation (that assigns to each function of bounded variation the corresponding Lebesgue–Stieltjes measure, and the integral with respect to the Lebesgue–Stieltjes measure agrees with the Riemann–Stieltjes integral for continuous functions), the above stated theorem generalizes the original statement of F. Riesz. [ 3 ] | https://en.wikipedia.org/wiki/Riesz–Markov–Kakutani_representation_theorem |
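The Riemann–Stieltjes representation can be illustrated numerically. In the sketch below, α is chosen as an example (not from the source) to be x plus a unit jump at 1/2, so the induced functional is A(f) = ∫₀¹ f(x) dx + f(1/2); a Riemann–Stieltjes sum on a uniform grid recovers this value:

```python
import numpy as np

# alpha(x) = x + (jump of size 1 at x = 1/2): bounded variation on [0, 1].
# The induced functional is A(f) = integral_0^1 f(x) dx + f(1/2).
def alpha(x):
    return x + (x >= 0.5)

def riemann_stieltjes(f, alpha, n=20000):
    # sum of f(midpoint) * (alpha(x_{k+1}) - alpha(x_k)) over a uniform grid
    x = np.linspace(0.0, 1.0, n + 1)
    mid = 0.5 * (x[:-1] + x[1:])
    return np.sum(f(mid) * np.diff(alpha(x)))

approx = riemann_stieltjes(np.cos, alpha)
exact = np.sin(1.0) + np.cos(0.5)  # integral of cos over [0,1] + point mass at 1/2
assert abs(approx - exact) < 1e-3
```

The jump in α contributes f evaluated in whichever grid interval contains 1/2, which converges to f(1/2) as the grid is refined.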
Rietveld refinement is a technique described by Hugo Rietveld for use in the characterisation of crystalline materials. The neutron and X-ray diffraction of powder samples results in a pattern characterised by reflections (peaks in intensity) at certain positions. The height, width and position of these reflections can be used to determine many aspects of the material's structure.
The Rietveld method uses a least squares approach to refine a theoretical line profile until it matches the measured profile. The introduction of this technique was a significant step forward in the diffraction analysis of powder samples as, unlike other techniques at that time, it was able to deal reliably with strongly overlapping reflections.
The method was first implemented in 1967, [ 1 ] and reported in 1969 [ 2 ] for the diffraction of monochromatic neutrons where the reflection-position is reported in terms of the Bragg angle , 2 θ . This terminology will be used here although the technique is equally applicable to alternative scales such as x-ray energy or neutron time-of-flight. The only wavelength- and technique-independent scale is in reciprocal space units or momentum transfer Q , which is historically rarely used in powder diffraction but very common in all other diffraction and optics techniques. The relation is Q = 4 π sin θ / λ . {\displaystyle Q={\frac {4\pi \sin \theta }{\lambda }}.}
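For reference, a small conversion routine between the 2θ scale and Q, assuming the standard relation Q = 4π sin θ / λ (the wavelength below is the usual Cu Kα value, used purely as an example):

```python
import math

def two_theta_to_q(two_theta_deg, wavelength):
    """Momentum transfer Q (in inverse units of the wavelength) from the
    scattering angle 2*theta in degrees: Q = 4*pi*sin(theta) / lambda."""
    theta = math.radians(two_theta_deg / 2.0)
    return 4.0 * math.pi * math.sin(theta) / wavelength

# Cu K-alpha radiation, lambda ~ 1.5406 angstrom; a reflection at 2theta = 30 deg
# corresponds to Q ~ 2.11 inverse angstrom:
q = two_theta_to_q(30.0, 1.5406)
assert abs(q - 2.111) < 0.01
```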
The most common powder X-ray diffraction (XRD) refinement technique used today is based on the method proposed in the 1960s by Hugo Rietveld . [ 2 ] The Rietveld method fits a calculated profile (including all structural and instrumental parameters) to experimental data. It employs the non-linear least squares method, and requires the reasonable initial approximation of many free parameters, including peak shape, unit cell dimensions and coordinates of all atoms in the crystal structure. Other parameters can be guessed while still being reasonably refined. In this way one can refine the crystal structure of a powder material from PXRD data. The successful outcome of the refinement is directly related to the quality of the data, the quality of the model (including initial approximations), and the experience of the user.
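The following is not a Rietveld refinement, only a heavily simplified sketch of its core idea: a calculated profile (here just two overlapping Gaussian reflections plus a flat background, with all peak values synthetic) is fitted to "observed" data by non-linear least squares, starting from a rough initial guess:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic "observed" pattern: two overlapping Gaussian reflections + background.
two_theta = np.linspace(20.0, 40.0, 400)

def profile(params, x):
    bkg, h1, c1, w1, h2, c2, w2 = params
    g = lambda h, c, w: h * np.exp(-0.5 * ((x - c) / w) ** 2)
    return bkg + g(h1, c1, w1) + g(h2, c2, w2)

true = [5.0, 100.0, 28.0, 0.30, 60.0, 28.8, 0.35]
rng = np.random.default_rng(3)
observed = profile(true, two_theta) + rng.normal(0.0, 1.0, two_theta.size)

# Refine from a rough starting guess by minimising the residual
# y_obs - y_calc, as in a (heavily simplified) Rietveld-style fit.
guess = [0.0, 80.0, 27.8, 0.5, 40.0, 29.0, 0.5]
fit = least_squares(lambda p: observed - profile(p, two_theta), guess)

assert fit.success
assert abs(fit.x[2] - 28.0) < 0.1 and abs(fit.x[5] - 28.8) < 0.1
```

A real Rietveld refinement computes the profile from structural and instrumental parameters (unit cell, atomic positions, peak-shape functions) rather than from free-standing Gaussians, but the least-squares machinery is the same.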
The Rietveld method is a powerful technique that opened a remarkable era for powder XRD and materials science in general. Powder XRD is at heart a very basic experimental technique with diverse applications and experimental options. Although PXRD data are one-dimensional and of limited resolution, it is possible to assess the accuracy of a crystal structure model by fitting a profile to a 1D plot of observed intensity versus angle. Rietveld refinement requires a crystal structure model and offers no way to derive such a model on its own. It can, however, be used to find structural details missing from a partial or complete ab initio structure solution, such as unit cell dimensions, phase quantities, crystallite sizes and shapes, atomic coordinates and bond lengths, microstrain in the crystal lattice, texture, and vacancies. [ 3 ] [ 4 ]
Before exploring Rietveld refinement, it is necessary to understand powder diffraction data and the information encoded therein, since a model of the diffraction pattern is the starting point of the refinement. A typical diffraction pattern can be described by the positions, shapes, and intensities of multiple Bragg reflections. Each of these three properties encodes information about the crystal structure, the properties of the sample, and the properties of the instrumentation. Some of these contributions are shown in Table 1, below.
Table 1. Peak positions are governed by the unit cell dimensions (a, b, c, α, β, γ); peak intensities are governed by the atomic parameters (x, y, z, B, etc.).
The structure of a powder pattern is essentially defined by instrumental parameters and two crystallographic parameters: unit cell dimensions, and atomic content and coordination. So, a powder pattern model can be constructed as follows:
It is easy to model a powder pattern given the crystal structure of a material. The opposite, determining the crystal structure from a powder pattern, is much more complicated. A brief explanation of the process follows, though it is not the focus of this article.
To determine structure from a powder diffraction pattern the following steps should be taken. First, Bragg peak positions and intensities should be found by fitting to a peak shape function including background. Next, peak positions should be indexed and used to determine unit cell parameters, symmetry, and content. Third, peak intensities determine space group symmetry and atomic coordination. Finally, the model is used to refine all crystallographic and peak shape function parameters. To do this successfully, there is a requirement for excellent data which means good resolution, low background, and a large angular range.
For general application of the Rietveld method, irrespective of the software used, the observed Bragg peaks in a powder diffraction pattern are best described by the so-called peak shape function (PSF). The PSF is a convolution of three functions: the instrumental broadening Ω(θ), the wavelength dispersion Λ(θ), and the specimen function Ψ(θ), with the addition of a background function b(θ). It is represented as follows:

PSF(θ) = Ω(θ) ⊗ Λ(θ) ⊗ Ψ(θ) + b(θ)

where ⊗ denotes convolution, which is defined for two functions f and g as the integral

(f ⊗ g)(θ) = ∫ f(θ′) g(θ − θ′) dθ′.
The instrumental function depends on the location and geometry of the source, monochromator, and sample. The wavelength function accounts for the distribution of wavelengths in the source and varies with the nature of the source and the monochromatization technique. The specimen function depends on dynamic scattering and on physical properties of the sample such as crystallite size and microstrain.
A short aside: unlike the other contributions, those from the specimen function can be of interest in materials characterization. The effects of average crystallite size, τ, and microstrain, ε, on the Bragg peak broadening β (in radians) can be described as follows, where k is a constant and λ is the wavelength:

β = kλ / (τ cos θ)   (size broadening)

β = k ε tan θ   (strain broadening)
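The two broadening relations above can be sketched numerically. This is a minimal illustration, not tied to any refinement package; the default wavelength (Cu K-alpha, in nm) and the constants k are assumed example values:

```python
import math

def size_broadening(tau, theta_deg, wavelength=0.15406, k=0.9):
    # Scherrer-type size broadening beta (radians) for average crystallite
    # size tau; tau and wavelength in the same length units (here nm).
    theta = math.radians(theta_deg)
    return k * wavelength / (tau * math.cos(theta))

def strain_broadening(epsilon, theta_deg, k=4.0):
    # Microstrain broadening beta (radians); k is the constant of the text
    # (a value of 4 is a common convention, assumed here).
    theta = math.radians(theta_deg)
    return k * epsilon * math.tan(theta)

# A 20 nm crystallite observed at theta = 20 degrees with Cu K-alpha:
beta_size = size_broadening(20.0, 20.0)
```

Note the opposite angular trends: size broadening grows as 1/cos θ, while strain broadening grows as tan θ, which is what allows the two effects to be separated.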
Returning to the peak shape function, the goal is to correctly model the Bragg peaks in the observed powder diffraction data. In the most general form, the intensity Y(i) of the i-th point (1 ≤ i ≤ n, where n is the number of measured points) is the sum of the contributions y_k from the m overlapped Bragg peaks (1 ≤ k ≤ m) and the background b(i), and can be described as follows:

Y(i) = b(i) + Σ_{k=1}^{m} I_k y_k(x_i)

where I_k is the intensity of the k-th Bragg peak and x_i = 2θ_i − 2θ_k. Since I_k is a multiplier, the behaviour of different normalized peak functions y(x) can be analyzed independently of the peak intensity, under the condition that the integral of the PSF over infinity is unity. Various functions of varying complexity can be chosen for this purpose. The most basic functions used to represent Bragg reflections are the Gaussian and Lorentzian functions. Most common, however, is the pseudo-Voigt function, a weighted sum of the two (the full Voigt profile is their convolution, but is computationally more demanding). The pseudo-Voigt profile is the basis for most other PSFs. The pseudo-Voigt function can be represented as:
where
and
are the Gaussian and Lorentzian contributions, respectively.
Thus,
where:
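The Gaussian, Lorentzian, and pseudo-Voigt profiles described above can be sketched as follows. This is a minimal, unit-area implementation in which the full width at half maximum H and the mixing parameter eta are free inputs:

```python
import math

def gauss(x, H):
    # Unit-area Gaussian with full width at half maximum H.
    c = (2.0 / H) * math.sqrt(math.log(2.0) / math.pi)
    return c * math.exp(-4.0 * math.log(2.0) * x * x / (H * H))

def lorentz(x, H):
    # Unit-area Lorentzian with full width at half maximum H.
    return (2.0 / (math.pi * H)) / (1.0 + 4.0 * x * x / (H * H))

def pseudo_voigt(x, H, eta):
    # Weighted sum of Lorentzian and Gaussian, mixing parameter 0 <= eta <= 1.
    return eta * lorentz(x, H) + (1.0 - eta) * gauss(x, H)
```

Because both component functions reach half their maximum at x = ±H/2, the pseudo-Voigt inherits the same FWHM for any eta, which is what makes the linear mixing a convenient stand-in for the true Voigt convolution.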
The pseudo-Voigt function, like the Gaussian and Lorentz functions, is a centrosymmetric function, and as such does not model asymmetry. This can be problematic for non-ideal powder XRD data, such as those collected at synchrotron radiation sources, which generally exhibit asymmetry due to the use of multiple focusing optics.
The Finger–Cox–Jephcoat function is similar to the pseudo-Voigt, but handles asymmetry better, treating it in terms of axial divergence. The function is a convolution of the pseudo-Voigt with the intersection of the diffraction cone and a finite receiving slit length, using two geometrical parameters, S/L and H/L, where S and H are the sample and detector slit dimensions in the direction parallel to the goniometer axis, and L is the goniometer radius. [ 7 ]
The shape of a powder diffraction reflection is influenced by the characteristics of the beam, the experimental
arrangement, and the sample size and shape. In the case of monochromatic neutron sources the convolution
of the various effects has been found to result in a reflex almost exactly Gaussian in shape. If this
distribution is assumed, then the contribution of a given reflection to the profile y_i at position 2θ_i is:

y_i = I_k exp( −4 ln(2) (2θ_i − 2θ_k)² / H_k² )

where H_k is the full width at half peak height (full-width half-maximum), 2θ_k is the centre of the reflex, and I_k is the calculated intensity of the reflex (determined from the structure factor , the Lorentz factor, and the multiplicity of the reflection).
At very low diffraction angles the reflections may acquire an asymmetry due to the vertical divergence of the beam.
Rietveld used a semi-empirical correction factor, A_s, to account for this asymmetry:
where P is the asymmetry factor and s is +1, 0, or −1 depending on whether the difference 2θ_i − 2θ_k is positive, zero, or negative, respectively.
At a given position, more than one diffraction peak may contribute to the profile. The intensity is simply the sum of all reflections contributing at the point 2θ_i.
For a Bragg peak (hkl), the observed integrated intensity I_hkl, as determined from numerical integration, is

I_hkl = Σ_j ( Y_j^obs − b_j )

where j runs over the data points in the range of the Bragg peak. The integrated intensity depends on multiple factors and can be expressed as the following product:
where:
The width of the diffraction peaks is found to broaden at higher Bragg angles. This angular dependence was originally represented by

H_k² = U tan²θ + V tan θ + W

where U, V, and W are the half-width parameters and may be refined during the fit.
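The angular dependence above (the Caglioti relation) can be sketched as a small helper. The U, V, W values used in the example and test are arbitrary illustrative numbers, not refined values for any instrument:

```python
import math

def fwhm(theta_deg, U, V, W):
    # Caglioti angular dependence of the peak width:
    #   H^2 = U tan^2(theta) + V tan(theta) + W
    # H is returned in the same units as sqrt(W).
    t = math.tan(math.radians(theta_deg))
    return math.sqrt(U * t * t + V * t + W)

# Example: peak width at two Bragg angles for hypothetical U, V, W.
h_low = fwhm(10.0, 0.01, -0.002, 0.002)
h_high = fwhm(40.0, 0.01, -0.002, 0.002)
```

At θ = 0 the width reduces to sqrt(W), so W sets the minimum instrumental width while U dominates at high angle.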
In powder samples there is a tendency for plate- or rod-like crystallites to align themselves along the axis of a cylindrical sample holder. In solid polycrystalline samples the production of the material may result in greater volume fraction of certain crystal orientations (commonly referred to as texture ). In such cases the reflex intensities will vary from that predicted for a completely random distribution. Rietveld allowed for moderate cases of the former by introducing a correction factor:
where I_obs is the intensity expected for a random sample, G is the preferred orientation parameter, and α is the acute angle between the scattering vector and the normal of the crystallites.
The principle of the Rietveld method is to minimize a function M that quantifies the difference between a calculated profile y^calc and the observed data y^obs. Rietveld defined such an equation as:

M = Σ_i W_i ( y_i^obs − (1/c) y_i^calc )²

where W_i is the statistical weight and c is an overall scale factor such that y^calc = c y^obs.
The fitting method used in Rietveld refinement is the non-linear least squares approach. A detailed derivation of non-linear least squares fitting will not be given here; further detail can be found in Chapter 6 of Pecharsky and Zavalij's text. [ 12 ] A few things are worth noting, however. First, non-linear least squares fitting is iterative, and convergence may be difficult to achieve if the initial approximation is too far from correct or when the minimized function is poorly defined. The latter occurs when correlated parameters are refined at the same time, which may result in divergence and instability of the minimization. Because the method is not exact, convergence does not occur immediately: each iteration depends on the results of the last, which dictate the new set of parameters used for refinement. Multiple refinement iterations are therefore required before the fit converges to a possible solution.
Using non-linear least squares minimization, the following system is solved:
where Y_i^calc is the calculated intensity and Y_i^obs is the observed intensity of a point i in the powder pattern, k is a scale factor, and n is the number of measured data points. The minimized function is given by:

Φ = Σ_{i=1}^{n} w_i ( Y_i^obs − Y_i^calc )²

where w_i is the weight, and k from the previous equation is unity (since k is usually absorbed into the phase scale factor). The summation extends over all n data points. Considering the peak shape functions and accounting for the overlap of Bragg peaks due to the one-dimensionality of XRD data, the expanded form of the above equation for the case of a single phase measured with a single wavelength becomes:
where:
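As a minimal illustration of the weighted minimization, note that a purely linear parameter such as an overall scale factor has a closed-form least squares solution, unlike the non-linear parameters that require iteration. The sketch below assumes statistical weights w_i = 1/Y_i^obs:

```python
def refine_scale(y_obs, y_model):
    # Closed-form weighted least-squares solution for a single linear
    # parameter: the scale s minimising sum_i w_i (y_obs_i - s*y_model_i)^2
    # with statistical weights w_i = 1/y_obs_i.
    num = sum(ym for ym in y_model)  # = sum_i w_i * y_obs_i * y_model_i
    den = sum(ym * ym / yo for yo, ym in zip(y_obs, y_model))
    return num / den

def chi_squared(y_obs, y_calc):
    # The minimised function: sum of weighted squared residuals, w_i = 1/y_obs_i.
    return sum((yo - yc) ** 2 / yo for yo, yc in zip(y_obs, y_calc))
```

For the non-linear parameters (peak shapes, atomic coordinates) no such closed form exists, which is why the full refinement must linearize around the current estimate and iterate.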
For a material that contains several phases (p), the contribution from each is accounted for by modifying the above equation as follows:
It can easily be seen from the above equations that experimentally minimizing the background, which holds no useful structural information, is paramount for a successful profile fitting. For a low background, the functions are defined by contributions from the integrated intensities and peak shape parameters. But with a high background, the function being minimized depends on the adequacy of the background and not integrated intensities or peak shapes. Thus, a structure refinement cannot adequately yield structural information in the presence of a large background.
It is also worth noting the increased complexity brought by the presence of multiple phases. Each additional phase adds more Bragg peaks to the fit, plus another scale factor tied to its structural and peak shape parameters. Mathematically these are easily accounted for, but practically, due to the finite accuracy and limited resolution of experimental data, each new phase can lower the quality and stability of the refinement. When precise structural parameters are of interest, it is advantageous to use single-phase materials. However, since the scale factors of each phase are determined independently, Rietveld refinement of multiphase materials can quantitatively examine the mixing ratio of the phases in the material.
Generally, the background is calculated as a Chebyshev polynomial . In GSAS and GSAS-II the background is treated as a Chebyshev polynomial of the first kind ("Handbook of Mathematical Functions", M. Abramowitz and I. A. Stegun, Ch. 22), with intensity given by:
where T′_{j−1} are the Chebyshev polynomials taken from Table 22.3, p. 795 of the Handbook. The coefficients have the form:
and the values for C_m are found in the Handbook. The angular range (2θ) is converted to X, to make the Chebyshev polynomial orthogonal, by

X = 2(2θ − 2θ_min) / (2θ_max − 2θ_min) − 1

and the orthogonal range for this function is −1 to +1.
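A sketch of this background model: map 2θ onto the orthogonal range [−1, +1] and sum Chebyshev polynomials of the first kind via their three-term recurrence. The coefficient values used in the example are arbitrary placeholders, not GSAS defaults:

```python
def chebyshev_background(two_theta, coeffs, tt_min, tt_max):
    # Background intensity at angle two_theta as a sum of Chebyshev
    # polynomials of the first kind, with 2-theta mapped onto [-1, 1].
    x = 2.0 * (two_theta - tt_min) / (tt_max - tt_min) - 1.0
    # Recurrence: T0 = 1, T1 = x, T_j = 2*x*T_{j-1} - T_{j-2}
    t_prev, t_curr = 1.0, x
    total = coeffs[0] * t_prev
    if len(coeffs) > 1:
        total += coeffs[1] * t_curr
    for c in coeffs[2:]:
        t_prev, t_curr = t_curr, 2.0 * x * t_curr - t_prev
        total += c * t_curr
    return total

# Hypothetical three-term background over a 10-80 degree pattern:
bkg = chebyshev_background(45.0, [1.0, 2.0, 3.0], 10.0, 80.0)
```

The orthogonality of the Chebyshev terms over [−1, 1] keeps the refined background coefficients weakly correlated, which helps the least squares stay stable.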
Sample displacement: sample transparency and zero-shift corrections
Different correction factors have been developed to account for the specimen-detector displacement in Debye-Scherrer [ 8 ] (transmission) and Bragg-Brentano [ 9 ] (reflection) geometries. Correction factors have been published for flat tilted and non-tilted 2D detectors as well. [ 10 ] [ 11 ]
Traditional X-ray diffraction experiments place the detector normal to the incident X-ray beam, centering the detector on the beam or placing the incident beam at the bottom edge of the detector. An alternative configuration keeps the incident X-rays at the bottom-most edge of the detector but tilts the detector forward so that its topmost edge lies directly above the sample. This provides a number of benefits, such as higher-quality and better-resolved radial distribution functions than typical traditional geometries, improved dynamic range, increased reciprocal-space resolution per pixel for low-angle scattering, and access to higher reciprocal-space values at lower energies. [ 12 ] Tilting the detector has little impact on SAXS experiments, but a large impact on WAXS measurements at large-scale facilities ( synchrotrons ).
The appropriate function for correcting peak positions 2θ_c for sample displacement depends on the geometry of the instrument . Guzmán's equation [ 11 ] provides the correction for flat detectors, generalized to non-conventional geometries in both reflection and transmission modes:

2θ_c = 2θ_u + arctan( Δ sin(4θ_u) [1 + tan(2θ_u) tan(φ)] / { 2 [ D_u − Δ cos²(2θ_u) (1 + tan(2θ_u) tan(φ)) ] } )

where φ is the tilt angle of the detector, 2θ_c and 2θ_u are the corrected and uncorrected diffraction angles, respectively, D_u is the uncorrected sample-to-detector distance, and Δ is the correction to the sample-to-detector distance (so the corrected distance is D_c = D_u − Δ).
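A direct transcription of this correction might look as follows. Angles are taken in degrees, Δ and D_u must share the same length units, and the numeric values in the example are arbitrary illustrative inputs:

```python
import math

def corrected_two_theta(two_theta_u_deg, delta, d_u, phi_deg=0.0):
    # Displacement correction for a flat detector: returns the corrected
    # diffraction angle 2*theta_c in degrees. Note sin(4*theta_u) equals
    # sin(2 * (2*theta_u)) and cos^2(2*theta_u) uses the full angle 2*theta_u.
    tt = math.radians(two_theta_u_deg)   # this is 2*theta_u in radians
    phi = math.radians(phi_deg)
    tan_term = 1.0 + math.tan(tt) * math.tan(phi)
    num = delta * math.sin(2.0 * tt) * tan_term
    den = 2.0 * (d_u - delta * math.cos(tt) ** 2 * tan_term)
    return two_theta_u_deg + math.degrees(math.atan(num / den))

# Hypothetical: 1 mm displacement at 100 mm sample-detector distance.
tt_c = corrected_two_theta(40.0, 1.0, 100.0)
```

With Δ = 0 the correction vanishes and the corrected angle equals the uncorrected one, a useful sanity check on any implementation.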
Now, given the considerations of background, peak shape functions, integrated intensity, and non-linear least squares minimization, the parameters used in the Rietveld refinement which put these things together can be introduced. Below are the groups of independent least squares parameters generally refined in a Rietveld refinement.
Each Rietveld refinement is unique and there is no prescribed sequence of parameters to include. It is up to the user to find the best sequence of parameters for refinement. It is rarely possible to refine all relevant variables simultaneously, either at the beginning of a refinement or near its end, since the least squares fitting would be destabilized or led into a false minimum. The user must also determine a stopping point for a given refinement. Given the complexity of Rietveld refinement, it is important to have a clear grasp of the system being studied (sample and instrumentation) to ensure that results are accurate, realistic, and meaningful. High data quality, a large enough 2θ range, and a good model to serve as the initial approximation in the least squares fitting are necessary for a successful, reliable, and meaningful Rietveld refinement.
Since refinement depends on finding the best fit between a calculated and experimental pattern, it is important to have a numerical figure of merit quantifying the quality of the fit. Below are the figures of merit generally used to characterize the quality of a refinement. They provide insight to how well the model fits the observed data.
Profile residual (reliability factor):
Weighted profile residual:
Bragg residual:
Expected profile residual:
Goodness of fit:
It is worth mentioning that all of these figures of merit except R_B include a contribution from the background. There are some concerns about their reliability, and there is no accepted threshold that separates a good fit from a poor one. [ 13 ] The most popular and conventional figure of merit is the goodness of fit, which should approach unity for a perfect fit, though this is rarely the case in practice. The best way to assess quality is a visual analysis of the fit, plotting the difference between the observed and calculated data on the same scale.
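The residuals listed above can be computed directly from the observed and calculated profiles. This sketch assumes statistical weights w_i = 1/Y_i^obs and uses (R_wp/R_exp)² for the goodness of fit:

```python
import math

def figures_of_merit(y_obs, y_calc, n_params):
    # Common Rietveld agreement indices with statistical weights w = 1/y_obs:
    #   Rp   = sum|yo - yc| / sum(yo)
    #   Rwp  = sqrt( sum w (yo - yc)^2 / sum w yo^2 )
    #   Rexp = sqrt( (n - p) / sum w yo^2 )
    #   GoF  = (Rwp / Rexp)^2
    w = [1.0 / yo for yo in y_obs]
    rp = sum(abs(yo - yc) for yo, yc in zip(y_obs, y_calc)) / sum(y_obs)
    num = sum(wi * (yo - yc) ** 2 for wi, yo, yc in zip(w, y_obs, y_calc))
    den = sum(wi * yo * yo for wi, yo in zip(w, y_obs))
    rwp = math.sqrt(num / den)
    rexp = math.sqrt((len(y_obs) - n_params) / den)
    return {"Rp": rp, "Rwp": rwp, "Rexp": rexp, "GoF": (rwp / rexp) ** 2}
```

Because the weighted sums in R_wp and R_exp share the same denominator, the goodness of fit reduces to the weighted residual sum divided by (n − p), i.e. the familiar reduced chi-squared.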
A riffle is a shallow landform in a flowing channel. [ 1 ] Colloquially, it is a shallow place in a river where water flows quickly past rocks. [ 2 ] However, in geology a riffle has specific characteristics.
Riffles are almost always found to have a very low discharge compared to the flow that fills the channel [ 3 ] (approximately 10–20%), and as a result the water moving over a riffle appears shallow and fast, with a wavy, disturbed surface. The water surface over a riffle at low flow also has a much steeper slope than that over other in-channel landforms. Channel sections with a mean water-surface slope of roughly 0.1 to 0.5% exhibit riffles, though riffles can occur in steeper or gentler sloping channels with coarser or finer bed materials, respectively. Except in the period after a flood (when fresh material is deposited on a riffle), the sediment on the riverbed in a riffle is usually much coarser than that in any other in-channel landform.
Terrestrial valleys normally consist of channels – geometric depressions in the valley floor carved by flowing water – and overbank regions that include floodplains and terraces. Some channels have shapes and sizes that hardly change along the river ; these do not have riffles. However, many channels exhibit readily apparent changes in width, bed elevation, and slope. In these cases, scientists realized that the riverbed often tends to rise and fall with distance downstream relative to an average elevation of the river's slope. That led scientists to map the bed elevation down the deepest path in a channel, called the thalweg , to obtain a longitudinal profile. Then, the piecewise linear slope of the river is computed and removed to leave just the rise and fall of the elevation about the channel's trendline. According to the zero-crossing method, [ 4 ] [ 5 ] riffles are all the locations along the channel whose residual elevation is greater than zero. Because of the prevalence of this method for identifying and mapping riffles, riffles are often thought of as part of a paired sequence, alternating with pools (the lows between the riffles). However, modern topographic maps of rivers with meter-scale resolution reveal that rivers exhibit a diversity of in-channel landforms. [ 6 ]
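The zero-crossing method described above amounts to detrending the thalweg profile and flagging positive residuals. A minimal sketch, using an ordinary least squares line as the trend (real studies may use piecewise or moving trends):

```python
def find_riffles(distance, elevation):
    # Zero-crossing sketch: fit a straight trend line to the thalweg
    # elevation profile, then flag stations whose residual elevation is
    # above zero as riffle candidates (lows are pool candidates).
    n = len(distance)
    mx = sum(distance) / n
    my = sum(elevation) / n
    sxx = sum((x - mx) ** 2 for x in distance)
    sxy = sum((x - mx) * (y - my) for x, y in zip(distance, elevation))
    slope = sxy / sxx
    intercept = my - slope * mx
    residual = [y - (slope * x + intercept) for x, y in zip(distance, elevation)]
    return [r > 0 for r in residual]
```

On an alternating high-low profile this flags every other station, reproducing the paired riffle-pool sequence the method is known for.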
For a long time, scientists have observed that, all other things being equal, riffles tend to be substantially wider than other in-channel landforms, [ 7 ] but only recently have river maps been of high enough quality to confirm that this is true. [ 8 ] The physical mechanism that explains why this happens is called flow convergence routing. [ 9 ] [ 10 ] This mechanism may be used in river engineering to design self-sustaining riffles, [ 11 ] [ 12 ] given a suitable sediment supply and flow regime. When an in-channel landform is shallow and narrow, instead of shallow and wide, it is called a nozzle.
Riffles are very important to the life in a stream, and many aquatic species rely on them in one way or another. Many species of benthic macroinvertebrates rely on the highly oxygenated, relatively unsedimented waters present in a riffle. Many species of fish, including rare and endangered species, use riffles for spawning. Fish not only spawn in and around riffles; riffles are also productive feeding grounds for fish and, in turn, for the predators that feed on fish. Riffles also serve to aerate the water, increasing the amount of dissolved oxygen in the body of water. [ 13 ] Water with high and relatively stable levels of dissolved oxygen is typically considered a healthy ecosystem because it can generally support greater biodiversity and total biomass .
Litter patches are collections of leaves, coarse particulate organic matter , and small woody stems found throughout riffles. [ 14 ] In riffles, these patches form at velocities between 13 and 89 cm/s, so litter types that can withstand the flow are more abundant there. [ 14 ] Leaf litter is the most common litter in riffles, and this influences which macroinvertebrate functional groups are found there; stoneflies, for example, are the dominant shredder species in riffles. [ 14 ] Other macroinvertebrates found in riffles are mayflies ( Ephemeroptera ). While population densities are generally higher in riffles than in pools, some groups, such as the flies ( Diptera ), are less present in riffles, with a low density compared to pools. [ 15 ] Nonbiting midges ( Diptera , Chironomidae ) and aquatic worms (class Oligochaeta ) are also found in riffles. [ 16 ]
Riffles also create a safe habitat for macroinvertebrates because of the varying depth, velocity, and substrate types found within them. [ 17 ] Macroinvertebrate densities vary from riffle to riffle with seasonality and surrounding habitat, but the macroinvertebrate makeup is fairly consistent. [ 17 ] While higher dissolved oxygen levels are only presumed to support higher densities in riffles, there is a proven positive association between phosphate levels and macroinvertebrates in riffles, indicating that phosphate is an important nutrient for them. [ 17 ] Seasonality, characterized by temperature (summer and winter) or by wetness (wet and dry seasons), is important for macroinvertebrate densities. Macroinvertebrates are found in lower abundance during the rainy or wet season because the high, constant inflow of water changes the system's temperature, water velocity, and aquatic community structure. [ 16 ] Food, shelter, and low flow rates during the dry season make it a more habitable time, supporting higher densities of macroinvertebrates. [ 16 ]
Riffles provide important habitat and food production for various aquatic organisms , but humans have altered aquatic ecosystems worldwide through infrastructure and land use changes. [ 18 ] Human interference of stream or river flow decreases sediment sizes, resulting in fewer riffles. [ 19 ]
Specifically, weirs and other dams have reduced existing riffles by flattening the channel with smaller substrate, resulting in habitat fragmentation . [ 19 ] [ 20 ] Dam removal has increased in recent times, and while its effects on riffles vary and are complex, riffles generally may redevelop. [ 18 ] As these riffles develop, they often have a lower biodiversity than the pre-dam ecosystem at first, but they benefit aquatic biodiversity in the long term. [ 18 ] Following weir removal, riffle fish populations have increased in diversity and density, and these fish have moved upstream to inhabit new riffles that redevelop after dam removal. [ 18 ] [ 21 ] The importance of riffles in supporting diverse assemblages of aquatic biota within streams and rivers may contribute to the increasing trend of dam removal.
Human land use change , specifically development of land , can indirectly affect riffles and riffle quality. [ 22 ] Terrestrial vegetation, such as tree branches and leaf litter, contributes to the formation of riffles and the stabilization of the ecosystem's channel, and as development reduces this vegetation, riffles may be diminished. [ 23 ] Species richness and diversity within riffles are susceptible to anthropogenic land use changes; management options for restoring riffles to increase aquatic biodiversity include removing sand and sediment and enhancing water flow to offset these impacts. [ 20 ]
Riftia pachyptila , commonly known as the giant tube worm and less commonly known as the giant beardworm , is a marine invertebrate in the phylum Annelida [ 1 ] (formerly grouped in phylum Pogonophora and Vestimentifera ) related to tube worms commonly found in the intertidal and pelagic zones . R. pachyptila lives on the floor of the Pacific Ocean near hydrothermal vents . The vents provide a natural ambient temperature in their environment ranging from 2 to 30 °C, [ 2 ] and this organism can tolerate extremely high hydrogen sulfide levels. These worms can reach a length of 3 m (9 ft 10 in), [ 3 ] and their tubular bodies have a diameter of 4 cm (1.6 in).
Its common name "giant tube worm" is, however, also applied to the largest living species of shipworm , Kuphus polythalamius , which despite the name "worm", is a bivalve mollusc rather than an annelid .
R. pachyptila was discovered in 1977 on an expedition of the American bathyscaphe DSV Alvin to the Galápagos Rift led by geologist Jack Corliss . [ 4 ] The discovery was unexpected, as the team was studying hydrothermal vents and no biologists were included in the expedition. Many of the species found living near hydrothermal vents during this expedition had never been seen before.
At the time, the presence of thermal springs near the midoceanic ridges was known. Further research uncovered aquatic life in the area, despite the high temperature (around 350–380 °C). [ 5 ] [ 6 ]
Many samples were collected, including bivalves, polychaetes, large crabs, and R. pachyptila . [ 7 ] [ 8 ] It was the first time the species had been observed.
R. pachyptila develops from a free-swimming, pelagic , nonsymbiotic trochophore larva, which enters juvenile ( metatrochophore ) development, becomes sessile , and subsequently acquires symbiotic bacteria. [ 9 ] [ 10 ] The symbiotic bacteria, on which adult worms depend for sustenance, are not present in the gametes, but are acquired from the environment through the skin in a process akin to an infection. The digestive tract transiently connects a mouth at the tip of the ventral medial process to a foregut, midgut, hindgut, and anus, and was previously thought to be the route by which the bacteria are introduced into adults. After the symbionts are established in the midgut, it undergoes substantial remodelling and enlargement to become the trophosome , while the remainder of the digestive tract has not been detected in adult specimens. [ 11 ]
Setting aside the white chitinous tube, the vermiform body differs only slightly from the classic three subdivisions typical of the phylum Pogonophora : [ 12 ] the prosoma, the mesosoma , and the metasoma .
The first body region is the vascularized branchial plume , which is bright red due to the presence of hemoglobins that contain up to 144 globin chains (each presumably including associated heme structures). These tube worm hemoglobins are remarkable for carrying oxygen in the presence of sulfide, without being inhibited by this molecule as the hemoglobins of most other species are. [ 13 ] [ 14 ] The plume provides essential nutrients to the bacteria living inside the trophosome . If the animal perceives a threat or is touched, it retracts the plume and the tube is closed by the obturaculum, a specialized operculum that protects and isolates the animal from the external environment. [ 15 ]
The second body region is the vestimentum , formed by muscle bands, winged in shape, and bearing the two genital openings at its posterior end. [ 16 ] [ 17 ] The heart, an extended portion of the dorsal vessel, is enclosed by the vestimentum. [ 18 ]
The middle part, the trunk or third body region, is full of vascularized solid tissue and includes the body wall, gonads , and coelomic cavity. Here also lies the trophosome, a spongy tissue in which billions of symbiotic, thioautotrophic bacteria and sulfur granules are found. [ 19 ] [ 20 ] Since the mouth, digestive system, and anus are missing, the survival of R. pachyptila depends on this mutualistic symbiosis. [ 21 ] This process, known as chemosynthesis , was recognized within the trophosome by Colleen Cavanaugh . [ 21 ]
The soluble hemoglobins present in the tentacles are able to bind O 2 and H 2 S , which are necessary for the chemosynthetic bacteria. These compounds are carried through the capillaries and absorbed by the bacteria. [ 22 ] During chemosynthesis, the mitochondrial enzyme rhodanese catalyzes the disproportionation of the thiosulfate anion S 2 O 3 2- to sulfur S and sulfite SO 3 2- . [ 23 ] [ 24 ] The bloodstream of R. pachyptila is responsible for absorbing O 2 and nutrients such as carbohydrates.
Nitrate and nitrite are toxic, but nitrogen is required for biosynthetic processes. The chemosynthetic bacteria within the trophosome convert nitrate to ammonium ions, which are then available for the production of amino acids in the bacteria, which are in turn released to the tube worm. To transport nitrate to the bacteria, R. pachyptila concentrates nitrate in its blood to a concentration 100 times that of the surrounding water. The exact mechanism of R. pachyptila 's ability to withstand and concentrate nitrate is still unknown. [ 14 ]
The posterior part, the fourth body region, is the opisthosome , which anchors the animal to the tube and is used for the storage of waste from bacterial reactions. [ 25 ]
The discovery of bacterial–invertebrate chemoautotrophic symbiosis , particularly in vestimentiferan tubeworms such as R. pachyptila , [ 21 ] and then in vesicomyid clams and mytilid mussels, revealed the chemoautotrophic potential of the hydrothermal vent tube worm. [ 26 ] Scientists discovered a remarkable source of nutrition that helps to sustain the conspicuous biomass of invertebrates at vents. [ 26 ] Many studies focusing on this type of symbiosis revealed the presence of chemoautotrophic, endosymbiotic, sulfur-oxidizing bacteria mainly in R. pachyptila , [ 27 ] which inhabits extreme environments and is adapted to the particular composition of the mixed volcanic and sea waters. [ 28 ] This special environment is filled with inorganic metabolites, essentially carbon , nitrogen , oxygen , and sulfur . In its adult phase, R. pachyptila lacks a digestive system. To provide for its energetic needs, it takes up dissolved inorganic nutrients (sulfide, carbon dioxide, oxygen, nitrogen) through its plume and transports them through a vascular system to the trophosome , which is suspended in paired coelomic cavities and is where the intracellular symbiotic bacteria are found. [ 20 ] [ 29 ] [ 30 ] The trophosome [ 31 ] is a soft tissue that runs through almost the whole length of the tube's coelom. It retains a large number of bacteria, on the order of 10 9 per gram of fresh weight. [ 32 ] Bacteria in the trophosome are retained inside bacteriocytes, thereby having no contact with the external environment. Thus, they rely on R. pachyptila for the assimilation of nutrients needed for the array of metabolic reactions they employ and for the excretion of waste products of carbon fixation pathways. At the same time, the tube worm depends completely on the microorganisms for the byproducts of their carbon fixation cycles that are needed for its growth.
Initial evidence for a chemoautotrophic symbiosis in R. pachyptila came from microscopic and biochemical analyses showing Gram-negative bacteria packed within a highly vascularized organ in the tubeworm trunk called the trophosome . [ 21 ] Additional analyses involving stable isotope , [ 33 ] enzymatic , [ 34 ] [ 26 ] and physiological [ 35 ] characterizations confirmed that the endosymbionts of R. pachyptila oxidize reduced-sulfur compounds to synthesize ATP for use in autotrophic carbon fixation through the Calvin cycle . The host tubeworm enables the uptake and transport of the substrates required for thioautotrophy, which are HS − , O 2 , and CO 2 , receiving back a portion of the organic matter synthesized by the symbiont population. Given the adult tubeworm's inability to feed on particulate matter and its complete dependence on its symbionts for nutrition, the bacterial population is the primary source of carbon acquisition for the symbiosis. Discovery of bacterial–invertebrate chemoautotrophic symbioses, initially in vestimentiferan tubeworms [ 21 ] [ 26 ] and then in vesicomyid clams and mytilid mussels, [ 26 ] pointed to an even more remarkable source of nutrition sustaining the invertebrates at vents .
A wide range of bacterial diversity is associated with symbiotic relationships with R. pachyptila . Many bacteria belong to the phylum Campylobacterota (formerly class Epsilonproteobacteria), [ 36 ] as supported by the discovery in 2016 of the new species Sulfurovum riftiae , belonging to the phylum Campylobacterota, family Helicobacteraceae , isolated from R. pachyptila collected from the East Pacific Rise . [ 37 ] Other symbionts belong to the classes Delta -, Alpha - and Gammaproteobacteria . [ 36 ] Candidatus Endoriftia persephone (Gammaproteobacteria) is a facultative R. pachyptila symbiont and has been shown to be a mixotroph , exploiting both the Calvin–Benson cycle and the reverse TCA cycle (with an unusual ATP citrate lyase ) according to the availability of carbon resources and whether it is free-living in the environment or inside a eukaryotic host. The bacteria apparently prefer a heterotrophic lifestyle when carbon sources are available. [ 31 ]
Evidence based on 16S rRNA analysis affirms that R. pachyptila 's chemoautotrophic bacteria belong to two different clades: the Gammaproteobacteria [ 38 ] [ 20 ] and the Campylobacterota (e.g. Sulfurovum riftiae ), [ 37 ] which get energy from the oxidation of inorganic sulfur compounds such as hydrogen sulfide (H 2 S, HS − , S 2- ) to synthesize ATP for carbon fixation via the Calvin cycle . [ 20 ] Most of these bacteria are still uncultivable. In this symbiosis, R. pachyptila provides nutrients such as HS − , O 2 , and CO 2 to the bacteria, and in turn receives organic matter from them. Thus, lacking a digestive system, R. pachyptila depends entirely on its bacterial symbionts to survive. [ 39 ] [ 40 ]
In the first step of sulfide-oxidation, reduced sulfur (HS − ) passes from the external environment into R. pachyptila blood, where, together with O 2 , it is bound by hemoglobin, forming the complex Hb-O 2 -HS − and then it is transported to the trophosome, where bacterial symbionts reside. Here, HS − is oxidized to elemental sulfur (S 0 ) or to sulfite (SO 3 2- ). [ 20 ]
In the second step, the symbionts oxidize sulfite via the "APS pathway" to obtain ATP. In this biochemical pathway, AMP reacts with sulfite in the presence of the enzyme APS reductase, giving APS (adenosine 5'-phosphosulfate). Then, in the presence of the enzyme ATP sulfurylase, APS reacts with pyrophosphate (PPi), giving ATP ( substrate-level phosphorylation ) and sulfate (SO 4 2- ) as end products. [ 20 ] In formulas:
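The two reactions just described can be written out explicitly. This is a reconstruction from the prose above, using standard notation for the APS pathway; it is not a formula copied from the cited sources:

```latex
% APS pathway, as described in the text (reconstruction):
\begin{align*}
\mathrm{AMP} + \mathrm{SO_3^{2-}}
  &\xrightarrow{\text{APS reductase}} \mathrm{APS} + 2e^{-}\\
\mathrm{APS} + \mathrm{PP_i}
  &\xrightarrow{\text{ATP sulfurylase}} \mathrm{ATP} + \mathrm{SO_4^{2-}}
\end{align*}
```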
The electrons released during the entire sulfide-oxidation process enter in an electron transport chain, yielding a proton gradient that produces ATP ( oxidative phosphorylation ). Thus, ATP generated from oxidative phosphorylation and ATP produced by substrate-level phosphorylation become available for CO 2 fixation in Calvin cycle, whose presence has been demonstrated by the presence of two key enzymes of this pathway: phosphoribulokinase and RubisCO . [ 26 ] [ 41 ]
To support this unusual metabolism, R. pachyptila has to absorb all the substances necessary for both sulfide-oxidation and carbon fixation, that is: HS − , O 2 and CO 2 and other fundamental bacterial nutrients such as N and P. This means that the tubeworm must be able to access both oxic and anoxic areas.
Oxidation of reduced sulfur compounds requires the presence of oxidized reagents such as oxygen and nitrate . Hydrothermal vents are characterized by conditions of high hypoxia . In hypoxic conditions, sulfur -storing organisms start producing hydrogen sulfide . Therefore, the production of H 2 S in anaerobic conditions is common among thiotrophic symbioses. H 2 S can be damaging to some physiological processes, as it inhibits the activity of cytochrome c oxidase , consequently impairing oxidative phosphorylation . In R. pachyptila , the production of hydrogen sulfide starts after 24 h of hypoxia. To avoid physiological damage, some animals, including Riftia pachyptila , are able to bind H 2 S to haemoglobin in the blood and eventually expel it into the surrounding environment.
Unlike metazoans , which respire carbon dioxide as a waste product, the R. pachyptila -symbiont association has a demand for a net uptake of CO 2 , as do cnidarian -symbiont associations. [ 42 ] Ambient deep-sea water contains an abundant amount of inorganic carbon in the form of bicarbonate HCO 3 − , but it is actually the chargeless form of inorganic carbon, CO 2 , that diffuses easily across membranes. The low partial pressure of CO 2 in the deep-sea environment is due to the seawater's alkaline pH and the high solubility of CO 2 , yet the pCO 2 of the blood of R. pachyptila may be as much as two orders of magnitude greater than the pCO 2 of deep-sea water. [ 42 ]
Elevated CO 2 partial pressures occur in the vicinity of vent fluids, due to the enriched inorganic carbon content of those fluids and their lower pH. [ 20 ] CO 2 uptake in the worm is enhanced by the higher pH of its blood (7.3–7.4), which favors the bicarbonate ion and thus maintains a steep gradient across which CO 2 diffuses into the vascular blood of the plume. [ 43 ] [ 20 ] The facilitation of CO 2 uptake by high environmental pCO 2 was first inferred based on measures of elevated blood and coelomic fluid pCO 2 in tubeworms, and was subsequently demonstrated through incubations of intact animals under various pCO 2 conditions. [ 30 ]
Once CO 2 is fixed by the symbionts , it must be assimilated by the host tissues. The supply of fixed carbon to the host is transported via organic molecules from the trophosome in the hemolymph , but the relative importance of translocation and symbiont digestion is not yet known. [ 30 ] [ 44 ] Pulse-chase labeling studies showed that the label first appears in symbiont-free host tissues within 15 min, indicating that a significant amount of organic carbon is released immediately after fixation. After 24 h, labeled carbon is clearly evident in the epidermal tissues of the body wall. The results of the pulse-chase autoradiographic experiments were also consistent with ultrastructural evidence for digestion of symbionts in the peripheral regions of the trophosome lobules. [ 44 ] [ 45 ]
In deep-sea hydrothermal vents, sulfide and oxygen are present in different areas. Indeed, the reducing fluid of hydrothermal vents is rich in sulfide but poor in oxygen, whereas sea water is richer in dissolved oxygen. Moreover, sulfide is immediately oxidized by dissolved oxygen to form partly, or totally, oxidized sulfur compounds like thiosulfate (S 2 O 3 2- ) and ultimately sulfate (SO 4 2- ), respectively less, or no longer, usable for microbial oxidation metabolism. [ 46 ] This makes the substrates less available for microbial activity; thus, bacteria are forced to compete with oxygen for their nutrients. To avoid this issue, several microbes have evolved symbioses with eukaryotic hosts. [ 47 ] [ 20 ] In fact, R. pachyptila is able to cover both the oxic and anoxic areas to get both sulfide and oxygen, [ 48 ] [ 49 ] [ 50 ] thanks to its hemoglobin, which can bind sulfide reversibly and independently of oxygen at functional binding sites determined to be zinc ions embedded in the A2 chains of the hemoglobins, [ 51 ] [ 52 ] [ 53 ] and then transport it to the trophosome , where bacterial metabolism can occur. It has also been suggested that cysteine residues are involved in this process. [ 54 ] [ 55 ] [ 56 ]
The acquisition of a symbiont by a host can occur in three ways: vertical transmission (symbionts passed from parent to offspring), horizontal transmission (symbionts passed between co-occurring hosts), or environmental acquisition (symbionts taken up from a free-living population in the environment).
Evidence suggests that R. pachyptila acquires its symbionts through its environment. In fact, 16S rRNA gene analysis showed that vestimentiferan tubeworms belonging to three different genera: Riftia , Oasisia , and Tevnia , share the same bacterial symbiont phylotype . [ 57 ] [ 58 ] [ 59 ] [ 60 ] [ 61 ]
This indicates that R. pachyptila takes its symbionts from a free-living bacterial population in the environment. Other studies also support this thesis: when R. pachyptila eggs were analyzed, 16S rRNA belonging to the symbiont was not found, showing that the bacterial symbiont is not transmitted vertically. [ 62 ]
Another proof to support the environmental transfer comes from several studies conducted in the late 1990s. [ 63 ] PCR was used to detect and identify a R. pachyptila symbiont gene whose sequence was very similar to the fliC gene that encodes some primary protein subunits ( flagellin ) required for flagellum synthesis. Analysis showed that R. pachyptila symbiont has at least one gene needed for flagellum synthesis. Hence, the question arose as to the purpose of the flagellum. Flagellar motility would be useless for a bacterial symbiont transmitted vertically, but if the symbiont came from the external environment, then a flagellum would be essential to reach the host organism and to colonize it. Indeed, several symbionts use this method to colonize eukaryotic hosts. [ 64 ] [ 65 ] [ 66 ] [ 67 ]
Thus, these results confirm the environmental transfer of R. pachyptila symbiont.
R. pachyptila [ 68 ] is a dioecious vestimentiferan. [ 69 ] Individuals of this species are sessile and are found clustered together around deep-sea hydrothermal vents of the East Pacific Rise and the Galapagos Rift. [ 70 ] The size of a patch of individuals surrounding a vent is within the scale of tens of metres. [ 71 ]
The male's spermatozoa are thread-shaped and are composed of three distinct regions: the acrosome (6 μm), the nucleus (26 μm) and the tail (98 μm). Thus, a single spermatozoon is about 130 μm long overall, with a diameter of 0.7 μm, which becomes narrower near the tail, reaching 0.2 μm. The sperm are arranged into agglomerations of around 340–350 individual spermatozoa that create a torch-like shape. The cup part is made up of the acrosomes and nuclei, while the handle is made up of the tails. The spermatozoa in the package are held together by fibrils . Fibrils also coat the package itself to ensure cohesion. [ citation needed ]
The large ovaries of females run within the gonocoel along the entire length of the trunk and are ventral to the trophosome. Eggs at different maturation stages can be found in the middle area of the ovaries, and depending on their developmental stage, are referred to as: oogonia , oocytes , and follicular cells . When the oocytes mature, they acquire protein and lipid yolk granules. [ citation needed ]
Males release their sperm into sea water. While the released agglomerations of spermatozoa, referred to as spermatozeugmata, do not remain intact for more than 30 seconds in laboratory conditions, they may maintain integrity for longer periods of time in specific hydrothermal vent conditions. Usually, the spermatozeugmata swim into the female's tube. Movement of the cluster is conferred by the collective action of each spermatozoon moving independently. Reproduction has also been observed involving only a single spermatozoon reaching the female's tube. Generally, fertilization in R. pachyptila is considered internal. However, some argue that, as the sperm is released into sea water and only afterwards reaches the eggs in the oviducts , it should be defined as internal-external. [ citation needed ]
R. pachyptila is completely dependent on the production of volcanic gases and the presence of sulfide -oxidizing bacteria. Therefore, its metapopulation distribution is profoundly linked to the volcanic and tectonic activity that creates active hydrothermal vent sites with a patchy and ephemeral distribution. The distance between active sites along a rift or adjacent segments can be very high, reaching hundreds of km. [ 70 ] This raises the question of larval dispersal . R. pachyptila is capable of larval dispersal across distances of 100 to 200 km, [ 70 ] and cultured larvae have been shown to be viable for 38 days. [ 72 ] Though dispersal is considered to be effective, the genetic variability observed in the R. pachyptila metapopulation is low compared to other vent species. This may be due to high rates of extinction and colonization events, as R. pachyptila is one of the first species to colonize a new active site. [ 70 ]
The endosymbionts of R. pachyptila are not passed to the fertilized eggs during spawning , but are acquired later, during the larval stage of the vestimentiferan worm. R. pachyptila 's planktonic larvae are transported by sea-bottom currents until they reach active hydrothermal vent sites; at this stage they are referred to as trophocores. The trophocore stage lacks endosymbionts, which are acquired once the larvae settle in a suitable environment and substrate. Free-living bacteria found in the water column are ingested randomly and enter the worm through a ciliated opening of the branchial plume. This opening is connected to the trophosome through a duct that passes through the brain. Once the bacteria are in the gut, the ones that are beneficial to the individual, namely sulfide-oxidizing strains, are phagocytized by epithelial cells found in the midgut and then retained. Bacteria that do not represent possible endosymbionts are digested. This raises questions as to how R. pachyptila manages to discern between essential and nonessential bacterial strains. The worm's ability to recognise a beneficial strain, as well as preferential host-specific infection by bacteria, have both been suggested as drivers of this phenomenon. [ 11 ]
R. pachyptila has the fastest growth rate of any known marine invertebrate. These organisms have been known to colonize a new site, grow to sexual maturity, and increase in length to 4.9 feet (1.5 m) in less than two years. [ 73 ]
Because of the peculiar environment in which R. pachyptila thrives, this species differs greatly from other deep-sea species that do not inhabit hydrothermal vents sites; the activity of diagnostic enzymes for glycolysis , citric acid cycle and transport of electrons in the tissues of R. pachyptila is very similar to the activity of these enzymes in the tissues of shallow-living animals. This contrasts with the fact that deep-sea species usually show very low metabolic rates , which in turn suggests that low water temperature and high pressure in the deep sea do not necessarily limit the metabolic rate of animals and that hydrothermal vents sites display characteristics that are completely different from the surrounding environment, thereby shaping the physiology and biological interactions of the organisms living in these sites. [ 32 ] | https://en.wikipedia.org/wiki/Riftia |
Rig Expert Ukraine Ltd is a manufacturer of RF antenna analysis [ 1 ] and antenna tuning [ 2 ] equipment for amateur ("ham") radio and PMR two-way radio . The company was founded in 2003 and is headquartered in Kyiv , Ukraine .
The AA-30, AA-54 & AA-170 are almost the same product except for the frequency range. Similarly, the AA-600, AA-1000 & AA-1400 are the same product except for the different frequency range.
In mathematics and physics , the right-hand rule is a convention and a mnemonic , utilized to define the orientation of axes in three-dimensional space and to determine the direction of the cross product of two vectors , as well as to establish the direction of the force on a current-carrying conductor in a magnetic field .
The various right- and left-hand rules arise from the fact that the three axes of three-dimensional space have two possible orientations. This can be seen by holding your hands together with palms up and fingers curled. If the curl of the fingers represents a movement from the first or x-axis to the second or y-axis, then the third or z-axis can point along either the right thumb or the left thumb.
The right-hand rule dates back to the 19th century, when it was implemented as a way of identifying the positive direction of coordinate axes in three dimensions. William Rowan Hamilton , recognized for his development of quaternions , a mathematical system for representing three-dimensional rotations, is often credited with introducing this convention. In the context of quaternions, the Hamiltonian product of two vector quaternions yields a quaternion comprising both scalar and vector components. [ 1 ] Josiah Willard Gibbs recognized that treating these components separately, as dot and cross product, simplifies vector formalism. Following a substantial debate, [ 2 ] the mainstream shifted from Hamilton's quaternionic system to Gibbs' three-vectors system. This transition led to the prevalent adoption of the right-hand rule in contemporary contexts. In particular, Gibbs outlines his intention of establishing a right-handed coordinate system in his pamphlet on vector analysis. [ 3 ] In Article 11 of the pamphlet, Gibbs states "The letters i , j , and k are appropriated to the designation of a normal system of unit vectors , i.e., three unit vectors, each of which is at right angles to the other two ... We shall always suppose that k is on the side of the i – j plane on which a rotation from i to j (through one right angle) appears counter-clockwise." While Gibbs did not use the term right-handed in his discussion, his instructions for defining the normal coordinate orientation are a clear statement of his intent for coordinates that follow the right-hand rule.
The cross product of vectors a and b is a vector perpendicular to the plane spanned by a and b , with its direction given by the right-hand rule : if you point the index finger of your right hand along a and the middle finger along b , then the thumb points in the direction of a × b . [ 4 ]
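As a numeric check, the component formula for the cross product encodes the right-hand rule: crossing the x-axis unit vector into the y-axis unit vector yields the z-axis unit vector. A minimal sketch in Python (pure standard library; names are illustrative):

```python
# Cross product of two 3-vectors; the sign pattern of this formula
# is what encodes the right-hand rule.
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

x = (1, 0, 0)  # index finger along the x-axis
y = (0, 1, 0)  # middle finger along the y-axis
print(cross(x, y))  # (0, 0, 1): thumb points along +z
```

Swapping the operands flips the sign of the result, mirroring the switch to the left hand.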
The right-hand rule in physics was introduced in the late 19th century by John Fleming in his book Magnets and Electric Currents. [ 5 ] Fleming described the orientation of the induced electromotive force by referencing the motion of the conductor and the direction of the magnetic field in the following depiction: “If a conductor, represented by the middle finger, be moved in a field of magnetic flux , the direction of which is represented by the direction of the forefinger , the direction of this motion being in the direction of the thumb, then the electromotive force set up in it will be indicated by the direction in which the middle finger points." [ 5 ]
For right-handed coordinates, if the thumb of a person's right hand points along the z -axis in the positive direction (third coordinate vector), then the fingers curl from the positive x -axis (first coordinate vector) toward the positive y -axis (second coordinate vector). When viewed at a position along the positive z -axis, the ¼ turn from the positive x- to the positive y- axis is counter-clockwise .
For left-handed coordinates, the above description of the axes is the same, except using the left hand; and the ¼ turn is clockwise .
Interchanging the labels of any two axes reverses the handedness. Reversing the direction of one axis (or three axes) also reverses the handedness. Reversing two axes amounts to a 180° rotation around the remaining axis, also preserving the handedness. These operations can be composed to give repeated changes of handedness. [ 6 ] (If the axes do not have a positive or negative direction, then handedness has no meaning.)
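These statements can be checked numerically: an ordered triple of axis vectors is right-handed exactly when its scalar triple product (the determinant of the matrix with the vectors as rows) is positive. A small sketch of the operations described above, assuming the standard unit axes:

```python
def det3(u, v, w):
    # Scalar triple product u . (v x w); positive means a right-handed triple.
    return (u[0]*(v[1]*w[2] - v[2]*w[1])
          - u[1]*(v[0]*w[2] - v[2]*w[0])
          + u[2]*(v[0]*w[1] - v[1]*w[0]))

x, y, z = (1, 0, 0), (0, 1, 0), (0, 0, 1)
print(det3(x, y, z))            # 1: right-handed
print(det3(y, x, z))            # -1: swapping two axes reverses handedness
print(det3(x, (0, -1, 0), z))   # -1: reversing one axis reverses handedness

neg = lambda v: tuple(-c for c in v)
print(det3(neg(x), neg(y), z))  # 1: reversing two axes preserves handedness
```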
In mathematics, a rotating body is commonly represented by a pseudovector along the axis of rotation . The length of the vector gives the speed of rotation and the direction of the axis gives the direction of rotation according to the right-hand rule: right fingers curled in the direction of rotation and the right thumb pointing in the positive direction of the axis. This allows some simple calculations using the vector cross-product. No part of the body is moving in the direction of the axis arrow. If the thumb is pointing north, Earth rotates according to the right-hand rule ( prograde motion ). This causes the Sun, Moon, and stars to appear to revolve westward according to the left-hand rule.
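The "simple calculations using the vector cross-product" mentioned above typically take the form v = ω × r, the velocity of a point on the rotating body. A minimal sketch with illustrative unit magnitudes (the thumb, i.e. ω, along +z):

```python
# Velocity of a point on a rotating body: v = omega x r.
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

omega = (0, 0, 1)  # angular-velocity pseudovector: right thumb along +z
r = (1, 0, 0)      # position of a point on the "equator" of the body
v = cross(omega, r)
print(v)           # (0, 1, 0): the point moves toward +y, as the curled fingers indicate
```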
A helix is a curved line formed by a point rotating around a center while the center moves up or down the z -axis. Helices are either right or left handed with curled fingers giving the direction of rotation and thumb giving the direction of advance along the z -axis.
The threads of a screw are helical and therefore screws can be right- or left-handed. To properly fasten or unfasten a screw, one applies the above rules: if a screw is right-handed, pointing one's right thumb in the direction of the hole and turning in the direction of the right hand's curled fingers (i.e. clockwise) will fasten the screw, while pointing away from the hole and turning in the new direction (i.e. counterclockwise) will unfasten the screw.
In vector calculus , it is necessary to relate a normal vector of a surface to the boundary curve of the surface. Given a surface S with a specified normal direction n̂ (a choice of "upward direction" with respect to S ), the boundary curve C around S is defined to be positively oriented provided that the right thumb points in the direction of n̂ and the fingers curl along the orientation of the bounding curve C .
Ampère's right-hand grip rule, [ 7 ] also called the right-hand screw rule , coffee-mug rule or the corkscrew-rule, is used either when a vector (such as the Euler vector ) must be defined to represent the rotation of a body, a magnetic field, or a fluid, or, vice versa, when it is necessary to define a rotation vector to understand how the rotation occurs. It relates the direction of an electric current to the direction of the magnetic field lines that the current creates. Ampère was inspired by fellow physicist Hans Christian Ørsted , who observed that compass needles were deflected in the proximity of an electric current -carrying wire and concluded that electricity could create magnetic fields .
This rule is used in two different applications of Ampère's circuital law : when the thumb points along a straight current-carrying wire, the curled fingers give the direction of the magnetic field lines circling the wire; and when the curled fingers follow the current around a loop or solenoid, the thumb gives the direction of the magnetic field through it.
The cross product of two vectors is often taken in physics and engineering. For example, as discussed above, the force exerted on a charged particle moving in a magnetic field B is given by the magnetic term of the Lorentz force : F = q v × B .
The direction of the cross product may be found by application of the right-hand rule as follows: point the index finger of the right hand along the first vector and the middle finger along the second; the extended thumb then points in the direction of their cross product.
For example, for a positively charged particle moving to the north, in a region where the magnetic field points west, the resultant force points up. [ 6 ]
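This worked example can be verified with the magnetic term of the Lorentz force, F = q v × B; the axis convention below (x = east, y = north, z = up) is an assumption for illustration:

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

q = 1.0             # positive charge
v = (0, 1, 0)       # moving north (+y)
B = (-1, 0, 0)      # magnetic field pointing west (-x)
F = tuple(q * c for c in cross(v, B))
print(F)            # (0.0, 0.0, 1.0): the force points up, as stated in the text
```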
The right-hand rule has widespread use in physics . A list of physical quantities whose directions are related by the right-hand rule is given below. (Some of these are related only indirectly to cross products, and use the second form.)
Unlike most mathematical concepts, the meaning of a right-handed coordinate system cannot be expressed in terms of any mathematical axioms . Rather, the definition depends on chiral phenomena in the physical world, for example the culturally transmitted meaning of right and left hands, a majority human population with dominant right hand, or certain phenomena involving the weak force . | https://en.wikipedia.org/wiki/Right-hand_rule |
Right To Know is a non-profit support project for those who discover, via genealogical genetic testing , that their lineage is not what they had supposed it to be due to family secrets and misattributed parentage, thus raising existential issues of adoption , race , ethnicity , culture , rape , etc. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ]
In geometry and trigonometry , a right angle is an angle of exactly 90 degrees or π/2 radians [ 1 ] corresponding to a quarter turn . [ 2 ] If a ray is placed so that its endpoint is on a line and the adjacent angles are equal, then they are right angles. [ 3 ] The term is a calque of Latin angulus rectus ; here rectus means "upright", referring to the vertical perpendicular to a horizontal base line.
Closely related and important geometrical concepts are perpendicular lines, meaning lines that form right angles at their point of intersection, and orthogonality , which is the property of forming right angles, usually applied to vectors . The presence of a right angle in a triangle is the defining factor for right triangles , [ 4 ] making the right angle basic to trigonometry.
The meaning of right in right angle possibly refers to the Latin adjective rectus 'erect, straight, upright, perpendicular'. A Greek equivalent is orthos 'straight; perpendicular' (see orthogonality ).
A rectangle is a quadrilateral with four right angles. A square has four right angles, in addition to equal-length sides.
The Pythagorean theorem and its converse provide a way to determine whether a triangle is a right triangle .
In Unicode , the symbol for a right angle is U+221F ∟ RIGHT ANGLE . It should not be confused with the similarly shaped symbol U+231E ⌞ BOTTOM LEFT CORNER . Related symbols are U+22BE ⊾ RIGHT ANGLE WITH ARC , U+299C ⦜ RIGHT ANGLE VARIANT WITH SQUARE , and U+299D ⦝ MEASURED RIGHT ANGLE WITH DOT . [ 5 ]
In diagrams, the fact that an angle is a right angle is usually expressed by adding a small right angle that forms a square with the angle in the diagram, as seen in the diagram of a right triangle (in British English, a right-angled triangle) to the right. The symbol for a measured angle, an arc, with a dot, is used in some European countries, including German-speaking countries and Poland, as an alternative symbol for a right angle. [ 6 ]
Right angles are fundamental in Euclid's Elements . They are defined in Book 1, definition 10, which also defines perpendicular lines. Definition 10 does not use numerical degree measurements but rather touches at the very heart of what a right angle is, namely two straight lines intersecting to form two equal and adjacent angles. [ 7 ] The straight lines which form right angles are called perpendicular. [ 8 ] Euclid uses right angles in definitions 11 and 12 to define acute angles (those smaller than a right angle) and obtuse angles (those greater than a right angle). [ 9 ] Two angles are called complementary if their sum is a right angle. [ 10 ]
Book 1 Postulate 4 states that all right angles are equal, which allows Euclid to use a right angle as a unit to measure other angles with. Euclid's commentator Proclus gave a proof of this postulate using the previous postulates, but it may be argued that this proof makes use of some hidden assumptions. Saccheri gave a proof as well but using a more explicit assumption. In Hilbert 's axiomatization of geometry this statement is given as a theorem, but only after much groundwork. One may argue that, even if postulate 4 can be proven from the preceding ones, in the order that Euclid presents his material it is necessary to include it since without it postulate 5, which uses the right angle as a unit of measure, makes no sense. [ 11 ]
A right angle may be expressed in different units: as 1 / 4 turn , 90 degrees , π /2 radians , 100 grads (also called gradians or gons), or 8 points (of a 32-point compass rose ).
Throughout history, carpenters and masons have known a quick way to confirm if an angle is a true right angle. It is based on the Pythagorean triple (3, 4, 5) and the rule of 3-4-5. From the angle in question, running a straight line along one side exactly three units in length, and along the second side exactly four units in length, will create a hypotenuse (the longer line opposite the right angle that connects the two measured endpoints) of exactly five units in length.
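The 3-4-5 rule amounts to testing the Pythagorean relation between the two measured sides and the diagonal; a minimal sketch:

```python
import math

def is_right_angle(leg_a, leg_b, diagonal, tol=1e-9):
    # The corner is a right angle iff a^2 + b^2 == c^2 (within tolerance).
    return math.isclose(leg_a**2 + leg_b**2, diagonal**2, rel_tol=tol)

print(is_right_angle(3, 4, 5))    # True: the carpenter's 3-4-5 rule
print(is_right_angle(3, 4, 5.1))  # False: the corner is out of square
```

Any Pythagorean triple, such as (5, 12, 13), works the same way; (3, 4, 5) is just the smallest and easiest to measure.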
Thales' theorem states that an angle inscribed in a semicircle (with a vertex on the semicircle and its defining rays going through the endpoints of the semicircle) is a right angle.
Two application examples involving the right angle and Thales' theorem are shown in the accompanying animations.
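Thales' theorem itself is easy to verify numerically: for any point P on a circle, the vectors from P to the two ends of a diameter have zero dot product. A sketch on the unit circle:

```python
import math

# Unit circle, diameter from A = (-1, 0) to B = (1, 0).
A, B = (-1.0, 0.0), (1.0, 0.0)
for t in (0.3, 1.0, 2.5):              # arbitrary points on the upper semicircle
    P = (math.cos(t), math.sin(t))
    PA = (A[0] - P[0], A[1] - P[1])    # vector from P to one end of the diameter
    PB = (B[0] - P[0], B[1] - P[1])    # vector from P to the other end
    dot = PA[0] * PB[0] + PA[1] * PB[1]
    print(round(abs(dot), 12))         # 0.0: the angle at P is a right angle
```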
The solid angle subtended by an octant of a sphere (the spherical triangle with three right angles) equals π /2 sr . [ 12 ] | https://en.wikipedia.org/wiki/Right_angle |
Right ascension (abbreviated RA ; symbol α ) is the angular distance of a particular point measured eastward along the celestial equator from the Sun at the March equinox to the ( hour circle of the) point in question above the Earth. [ 1 ] When paired with declination , these astronomical coordinates specify the location of a point on the celestial sphere in the equatorial coordinate system .
An old term, right ascension ( Latin : ascensio recta ) [ 2 ] refers to the ascension , or the point on the celestial equator that rises with any celestial object as seen from Earth 's equator , where the celestial equator intersects the horizon at a right angle . It contrasts with oblique ascension , the point on the celestial equator that rises with any celestial object as seen from most latitudes on Earth, where the celestial equator intersects the horizon at an oblique angle . [ 3 ]
Right ascension is the celestial equivalent of terrestrial longitude . Both right ascension and longitude measure an angle from a primary direction (a zero point) on an equator . Right ascension is measured from the Sun at the March equinox, i.e. the First Point of Aries , which is the place on the celestial sphere where the Sun crosses the celestial equator from south to north at the March equinox and is currently located in the constellation Pisces . Right ascension is measured continuously in a full circle from that alignment of Earth and Sun in space, that equinox, the measurement increasing towards the east. [ 4 ]
As seen from Earth (except at the poles), objects noted to have 12 h RA are longest visible (appear throughout the night) at the March equinox; those with 0 h RA (apart from the sun) do so at the September equinox. On those dates at midnight, such objects will reach ("culminate" at) their highest point (their meridian). How high depends on their declination; if 0° declination (i.e. on the celestial equator ) then at Earth's equator they are directly overhead (at zenith ).
Any angular unit could have been chosen for right ascension, but it is customarily measured in hours ( h ), minutes ( m ), and seconds ( s ), with 24 h being equivalent to a full circle . Astronomers have chosen this unit to measure right ascension because they measure a star's location by timing its passage through the highest point in the sky as the Earth rotates . The line which passes through the highest point in the sky, called the meridian , is the projection of a longitude line onto the celestial sphere. Since a complete circle contains 24 h of right ascension or 360° ( degrees of arc ), 1 / 24 of a circle is measured as 1 h of right ascension, or 15°; 1 / 1440 of a circle is measured as 1 m of right ascension, or 15 minutes of arc (also written as 15′); and 1 / 86400 of a circle contains 1 s of right ascension, or 15 seconds of arc (also written as 15″). A full circle, measured in right-ascension units, contains 24 × 60 × 60 = 86 400 s , or 24 × 60 = 1 440 m , or 24 h . [ 5 ]
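The hour-based convention converts to degrees at 15° per hour; a minimal sketch (the function name is an assumption for illustration):

```python
def ra_to_degrees(h, m, s):
    """Convert right ascension given as (hours, minutes, seconds)
    to degrees of arc, using 1 h = 15 deg."""
    return 15.0 * (h + m / 60.0 + s / 3600.0)

print(ra_to_degrees(24, 0, 0))  # 360.0 -- a full circle
print(ra_to_degrees(1, 0, 0))   # 15.0
print(ra_to_degrees(0, 1, 0))   # 0.25, i.e. 15 minutes of arc
```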
Because right ascensions are measured in hours (of rotation of the Earth ), they can be used
to time the positions of objects in the sky. For example, if a star with RA = 1 h 30 m 00 s is at its meridian, then a star with RA = 20 h 00 m 00 s will be at its meridian (its apparent highest point) 18.5 sidereal hours later.
Sidereal hour angle, used in celestial navigation , is similar to right ascension but increases westward rather than eastward. Usually measured in degrees (°), it is the complement of right ascension with respect to 24 h . [ 6 ] It is important not to confuse sidereal hour angle with the astronomical concept of hour angle , which measures the angular distance of an object westward from the local meridian .
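Since sidereal hour angle is the westward complement of right ascension, converting between the two is a one-line computation. A sketch (the function name is illustrative):

```python
def sidereal_hour_angle(ra_hours):
    """Sidereal hour angle in degrees: the complement of right
    ascension with respect to 24 h, increasing westward."""
    return ((24.0 - ra_hours) % 24.0) * 15.0

print(sidereal_hour_angle(18.0))  # 90.0
print(sidereal_hour_angle(0.0))   # 0.0
```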
The Earth's axis traces a small circle slowly westward about the poles of the ecliptic , completing one cycle in about 26,000 years. This movement, known as precession , causes the coordinates of stationary celestial objects to change continuously, if rather slowly. Therefore, equatorial coordinates (including right ascension) are inherently relative to the year of their observation, and astronomers specify them with reference to a particular year, known as an epoch . Coordinates from different epochs must be mathematically rotated to match each other, or to match a standard epoch. [ 7 ] Right ascension for "fixed stars" on the equator increases by about 3.1 seconds per year or 5.1 minutes per century, but for fixed stars away from the equator the rate of change can be anything from negative infinity to positive infinity. (To this must be added the proper motion of a star.) Over a precession cycle of 26,000 years, "fixed stars" that are far from the ecliptic poles increase in right ascension by 24 h , or about 5.6 min per century, whereas stars within 23.5° of an ecliptic pole undergo a net change of 0 h . The right ascension of Polaris is increasing quickly: in AD 2000 it was 2.5 h , but when it gets closest to the north celestial pole in 2100 its right ascension will be 6 h . The North Ecliptic Pole in Draco and the South Ecliptic Pole in Dorado are always at right ascension 18 h and 6 h respectively.
The currently used standard epoch is J2000.0 , which is January 1, 2000 at 12:00 TT . The prefix "J" indicates that it is a Julian epoch . Prior to J2000.0, astronomers used the successive Besselian epochs B1875.0, B1900.0, and B1950.0. [ 8 ]
The concept of right ascension has been known at least as far back as Hipparchus who measured stars in equatorial coordinates in the 2nd century BC. But Hipparchus and his successors made their star catalogs in ecliptic coordinates , and the use of RA was limited to special cases.
With the invention of the telescope , it became possible for astronomers to observe celestial objects in greater detail, provided that the telescope could be kept pointed at the object for a period of time. The easiest way to do that is to use an equatorial mount , which allows the telescope to be aligned with one of its two pivots parallel to the Earth's axis. A motorized clock drive often is used with an equatorial mount to cancel out the Earth's rotation . As the equatorial mount became widely adopted for observation, the equatorial coordinate system, which includes right ascension, was adopted at the same time for simplicity. Equatorial mounts could then be accurately pointed at objects with known right ascension and declination by the use of setting circles . The first star catalog to use right ascension and declination was John Flamsteed 's Historia Coelestis Britannica (1712, 1725). | https://en.wikipedia.org/wiki/Right_ascension |
A right circular cylinder is a cylinder whose generatrices are perpendicular to the bases. Thus, in a right circular cylinder, the generatrix and the height have the same measure. [ 1 ] It is also called, less often, a cylinder of revolution, because it can be obtained by rotating a rectangle of sides r {\displaystyle r} and g {\displaystyle g} around one of its sides. Fixing g {\displaystyle g} as the side on which the revolution takes place, the side r {\displaystyle r} , perpendicular to g {\displaystyle g} , is the measure of the radius of the cylinder. [ 2 ]
In addition to the right circular cylinder, within the study of spatial geometry there is also the oblique circular cylinder , characterized by not having the generatrices perpendicular to the bases. [ 3 ]
Bases : the two parallel and congruent circles of the bases; [ 4 ]
Axis : the line determined by the two points of the centers of the cylinder's bases; [ 1 ]
Height : the distance between the two planes of the cylinder's bases; [ 2 ]
Generatrices : the line segments parallel to the axis and that have ends at the points of the bases' circles . [ 2 ]
The lateral surface of a right cylinder is the union of the generatrices. [ 3 ] Its area is the product of the length of the circumference of the base and the height of the cylinder. Therefore, the lateral surface area is given by: L = 2 π r h {\displaystyle L=2\pi rh} where r {\displaystyle r} is the radius of the base and h {\displaystyle h} is the height. Note that in the case of the right circular cylinder, the height and the generatrix have the same measure, so the lateral area can also be given by: L = 2 π r g {\displaystyle L=2\pi rg}
The area of the base of a cylinder is the area of a circle (in this case we define that the circle has a radius with measure r {\displaystyle r} ): B = π r 2 {\displaystyle B=\pi r^{2}}
To calculate the total area of a right circular cylinder, you simply add the lateral area to the area of the two bases: A T = L + 2 B {\displaystyle A_{T}=L+2B} Replacing L = 2 π r h {\displaystyle L=2\pi rh} and B = π r 2 {\displaystyle B=\pi r^{2}} , we have: A T = 2 π r h + 2 π r 2 {\displaystyle A_{T}=2\pi rh+2\pi r^{2}} or even A T = 2 π r ( h + r ) {\displaystyle A_{T}=2\pi r(h+r)}
The volume of the cylinder can be determined through Cavalieri's principle : if two solids of the same height, with congruent base areas, rest on the same plane, and every plane parallel to that plane sections both solids in regions of the same area, [ 6 ] then the two solids have the same volume.
This is because the volume of a cylinder can be obtained in the same way as the volume of a prism with the same height and the same base area. Therefore, simply multiply the area of the base by the height: V = B h {\displaystyle V=Bh} Since the area of a circle of radius r {\displaystyle r\,} , which is the base of the cylinder, is given by B = π r 2 {\displaystyle B=\pi r^{2}} it follows that: V = π r 2 h {\displaystyle V=\pi r^{2}h}
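The area and volume formulas for the right circular cylinder can be collected in one short sketch (the function name is illustrative):

```python
import math

def cylinder_measures(r, h):
    """Lateral area, base area, total area and volume of a right
    circular cylinder of base radius r and height h."""
    lateral = 2 * math.pi * r * h           # L = 2*pi*r*h
    base = math.pi * r ** 2                 # B = pi*r^2
    total = lateral + 2 * base              # T = 2*pi*r*(h + r)
    volume = base * h                       # V = pi*r^2*h
    return lateral, base, total, volume

L, B, T, V = cylinder_measures(r=3, h=5)
print(round(T, 2), round(V, 2))  # 150.8 141.37
```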
The equilateral cylinder is characterized by being a right circular cylinder in which the diameter of the base is equal to the value of the height (generatrix). [ 4 ]
Then, assuming that the radius of the base of an equilateral cylinder is r {\displaystyle r\,} then the diameter of the base of this cylinder is 2 r {\displaystyle 2r\,} and its height is 2 r {\displaystyle 2r\,} . [ 4 ]
Its lateral area can be obtained by replacing the height value by 2 r {\displaystyle 2r} : L = 2 π r ⋅ 2 r = 4 π r 2 {\displaystyle L=2\pi r\cdot 2r=4\pi r^{2}}
The result can be obtained in a similar way for the total area: A T = 4 π r 2 + 2 π r 2 = 6 π r 2 {\displaystyle A_{T}=4\pi r^{2}+2\pi r^{2}=6\pi r^{2}}
For the equilateral cylinder it is possible to obtain a simpler formula to calculate the volume. Simply substitute the radius and height measurements defined earlier into the volume formula for a right circular cylinder: V = π r 2 ⋅ 2 r = 2 π r 3 {\displaystyle V=\pi r^{2}\cdot 2r=2\pi r^{3}}
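The equilateral-cylinder formulas are consistent with the general ones once the height is set to 2r, which a few assertions can confirm:

```python
import math

r = 2.0        # any radius; an equilateral cylinder has height h = 2r
h = 2 * r

lateral = 2 * math.pi * r * h          # general formula -> 4*pi*r^2
total = lateral + 2 * math.pi * r**2   # -> 6*pi*r^2
volume = math.pi * r**2 * h            # -> 2*pi*r^3

assert math.isclose(lateral, 4 * math.pi * r**2)
assert math.isclose(total, 6 * math.pi * r**2)
assert math.isclose(volume, 2 * math.pi * r**3)
print("equilateral-cylinder identities hold")
```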
The meridian section is the intersection between the cylinder and a plane containing the axis of the cylinder. [ 4 ]
In the case of the right circular cylinder, the meridian section is a rectangle, because the generatrix is perpendicular to the base. The equilateral cylinder, on the other hand, has a square meridian section because its height is congruent to the diameter of the base. [ 1 ] [ 4 ] | https://en.wikipedia.org/wiki/Right_circular_cylinder |
In geometry , a right conoid is a ruled surface generated by a family of straight lines that all intersect perpendicularly to a fixed straight line, called the axis of the right conoid .
Using a Cartesian coordinate system in three-dimensional space , if we take the z -axis to be the axis of a right conoid, then the right conoid can be represented by the parametric equations : x = v cos ⁡ u , y = v sin ⁡ u , z = h ( u ) {\displaystyle x=v\cos u,\;y=v\sin u,\;z=h(u)} where h ( u ) is some function for representing the height of the moving line.
A typical example of right conoids is given by the parametric equations
The image on the right shows how the coplanar lines generate the right conoid.
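A point of a right conoid can be generated directly from the parametrization; in the sketch below, h(u) = sin u is an assumed example height function (the definition only requires that every generating line be perpendicular to the axis):

```python
import math

def right_conoid_point(u, v, h=math.sin):
    """Point of the right conoid x = v*cos(u), y = v*sin(u), z = h(u).
    For fixed u, varying v traces one generating line; every such line
    is horizontal, hence perpendicular to the z-axis."""
    return v * math.cos(u), v * math.sin(u), h(u)

# Two points on the same generator (same u) share the same height z,
# so that generator meets the z-axis at a right angle.
p1 = right_conoid_point(0.7, 1.0)
p2 = right_conoid_point(0.7, 2.0)
print(math.isclose(p1[2], p2[2]))  # True
```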
Other right conoids include:
The right of reply or right of correction generally means the right to defend oneself against public criticism in the same venue where it was published. In some countries, such as Brazil , [ 1 ] it is a legal right or even a constitutional right . In other countries, it is not a legal right as such, but a right which certain media outlets and publications choose to grant to people who have been severely criticised by them, as a matter of editorial policy.
The Brazilian Constitution guarantees the right of reply ( direito de resposta ). [ 2 ]
In 2020, a judge ordered the Brazilian government to post a letter from an Indigenous group on official government websites for 30 days. [ 3 ] The Waimiri-Atroari felt that the then-president of Brazil, Jair Bolsonaro , was using racist rhetoric against them in a dispute over building a transmission line through the Waimiri-Atroari Indigenous Reserve in the Amazon rainforest , and they exercised their right of reply this way. [ 3 ]
In Europe, there have been proposals for a legally enforceable right of reply that applies to all media , including newspapers , magazines , and other print media, along with radio , television , and the internet . In 1974, the Committee of Ministers of the Council of Europe adopted a resolution granting a right of reply to all individuals. [ 4 ] Article 1 of a 2004 Council of Europe recommendation defined a right of reply as: "offering a possibility to react to any information in the media presenting inaccurate facts ... which affect ... personal rights." [ 5 ] [ 6 ]
In the federal system of Germany , the individual federal states are responsible for education, cultural affairs, and also the press and electronic media. All press laws of the 16 federal states guarantee the right to a counter-presentation of factual statements which are deemed to be wrong by the individuals and organisations concerned. This is based on article 11 of the national press law of 1874, and is found in all 16 laws as §11 or §10 in slightly modified versions.
Austria and Switzerland have similar laws on the books. In Austria, this is in article 9 of the national media law, in Switzerland in article 28g of the civil code.
In France , the right to a corrective reply goes back to article 13 of the Law on the freedom of the press of July 29, 1881 and is renewed and extended to broadcast and digital media via various laws and decrees.
The Belgian law on the right to reply emerged in 1831 as article 13 of the 1831 decree on the press. This was replaced 130 years later by the law on the droit de réponse , the «loi du 23 juin 1961». Originally referring only to the printed press, this law was amended in 1977 by the law of «4 mars 1977 relative au droit de réponse dans l’audiovisuel», i.e. audiovisual media, published in the Moniteur Belge of March 15, 1977. [ 7 ] [ 8 ] Since the federalisation of the Belgian state in 1980, the language communities are responsible for the media: the Flemish community passed a decree dated March 4, 2005, which regulates the right to reply in articles 177 to 199, and the German-language community passed the decree of 27 June 2005, which simply refers to the law of 1961 as amended in 1977. [ 7 ] : p.32
The United Nations recognises the "International Right of Correction" through the "Convention on the International Right of Correction", which entered into force on August 24, 1962. [ 9 ]
A Florida right of reply law (referring to print media) was overturned by Miami Herald Publishing Co. v. Tornillo , 418 U.S. 241 (1974), while a FCC policy (referring to broadcast media) was affirmed in Red Lion Broadcasting Co. v. FCC , 395 U.S. 367 (1969). The policy was subsequently completely abandoned in 1987. [ 10 ]
A right of reply can also be part of the editorial policy of a news publication or an academic journal. The BBC 's Editorial Guidelines state: [ 11 ]
When our output makes allegations of wrongdoing, iniquity or incompetence or lays out a strong and damaging critique of an individual or institution the presumption is that those criticised should be given a "right of reply", that is, given a fair opportunity to respond to the allegations.
The Australasian Journal of Philosophy 's editorial policy says: [ 12 ]
[A]uthors of the materials being commented on [in Discussion Notes] may be given a right of reply (subject to the usual refereeing), on the understanding that timely publication of the Note will take priority over the desirability of including both Note and Reply in the same issue of the Journal.
In the U.S., there is a journalistic standard of including denials, exemplified by the ethical code of the Society of Professional Journalists : "Diligently seek subjects of news coverage to allow them to respond to criticism or allegations of wrongdoing." [ 13 ] | https://en.wikipedia.org/wiki/Right_of_reply |
Right to know is a human right enshrined in law in several countries. UNESCO defines it as the right for people to "participate in an informed way in decisions that affect them, while also holding governments and others accountable". [ 1 ] It pursues universal access to information as essential foundation of inclusive knowledge societies . [ 2 ] It is often defined in the context of the right for people to know about their potential exposure to environmental conditions or substances that may cause illness or injury, but it can also refer more generally to freedom of information or informed consent .
Right to know regarding environmental hazard information is protected by Australian law, which is described at Department of Sustainability, Environment, Water, Population and Communities . [ 3 ]
Right to know regarding workplace hazard information is protected by Australian law, which is described at Safe Work Australia and at the Hazardous Substances Information System. [ 4 ] [ 5 ]
Right to know regarding workplace hazard information is protected by Canadian law. [ 6 ]
Right to know regarding environmental hazard information is protected by Canadian law, which is described at Environment Canada . [ 7 ]
Europe consists of many countries, each of which has its own laws. The European Commission provides central access to most of the information about individual regulatory agencies and laws.
Right to know about environmental hazards is managed by the European Commission's Directorate-General for the Environment and by the European Environment Agency . [ 8 ] [ 9 ]
Right to know about workplace hazards is managed by the European Agency for Safety and Health at Work . [ 10 ]
In the context of the United States workplace and community environmental law , right to know is the legal principle that the individual has the right to know the chemicals to which they may be exposed in their daily living. It is embodied in United States federal law as well as in local laws in several U.S. states . "Right to Know" laws take two forms: Community Right to Know and Workplace Right to Know. Each grants certain rights to those groups. The "right to know" concept is included in Rachel Carson 's book Silent Spring . [ 11 ]
Toxic substances used in the work area must be disclosed to the occupants under laws managed by Occupational Safety and Health Administration . [ 12 ] [ 13 ] [ 14 ]
Hazardous substances used outside buildings must be disclosed to the appropriate state or local agency responsible for state environmental protection, [ 15 ] including regulatory actions outside federal land. Use on federal land is managed by the United States Environmental Protection Agency and the Bureau of Land Management . [ 16 ]
The US Department of Defense is self-regulating, and as such, is immune to state and federal law pertaining to Occupational Safety and Health Administration (OSHA) and Environmental Protection Agency (EPA) regulations on foreign and domestic soil. [ citation needed ]
Occupational Health and Safety is managed within most states under federal authority. [ 17 ]
Workplace safety and health in the U.S. operates under the framework established by the federal Occupational Safety and Health Act of 1970 (OSH Act). [ citation needed ]
Occupational Safety and Health Administration (OSHA) within the U.S. Department of Labor is responsible for issuing and enforcing regulations covering workplace safety. [ citation needed ]
The Department of Transportation is responsible for transportation safety and for maintaining the list of hazardous materials. [ citation needed ]
The Environmental Protection Agency is responsible for maintaining lists of specific hazardous materials. [ citation needed ]
Environmental health and safety outside the workplace is established by the Emergency Planning and Community Right-to-Know Act (EPCRA), which is managed by the Environmental Protection Agency (EPA) and various state and local government agencies. [ 18 ]
State and local agencies maintain epidemiology information required by physicians to evaluate environmental illness. [ citation needed ]
Air quality information must be provided by pest control supervisors [ citation needed ] under license requirements established by the Worker Protection Standard when restricted use pesticide is applied.
The list of restricted use pesticides is maintained by the US EPA. [ 19 ]
Additionally, specific environmental pollutants are identified in public law, which extends to all hazardous substances even if the item is not identified as a restricted use pesticide by the EPA. As an example, cyfluthrin , cypermethrin , and cynoff produce hydrogen cyanide upon combustion, but some pesticides that inadvertently produce noxious chemicals may not be identified as restricted-use pesticides. [ 20 ]
Some specific chemicals, such as cyaniate , cyanide , cyano , and nitrile compounds, satisfy the specific hazard definition that is identified in public law regardless of whether or not the item is identified on the list of restricted use pesticides maintained by the United States Environmental Protection Agency. Title 42 U.S.C. Section 7413 contains the reporting requirement for environmental pollutants. [ 22 ]
Environmental illnesses share characteristics with common diseases. For example, cyanide exposure symptoms include weakness, headache, nausea, confusion, dizziness, seizures, cardiac arrest, and unconsciousness. [ 23 ] [ 24 ] Influenza and heart disease present many of the same symptoms.
Failure to obtain proper disclosure that is required by physicians will result in improper, ineffective, or delayed medical diagnosis and treatment for environmental illness caused by exposure to hazardous substance and by exposure to radiation. [ citation needed ]
The Pipeline and Hazardous Materials Safety Administration within the US Department of Transportation is responsible for maintaining the list of hazardous materials within the United States. [ 25 ]
All hazardous materials that are not created at the work site must be transported by motor vehicle. The safety and security of the public transportation system is enforced by the Department of Transportation. [ 26 ]
The Department of Transportation also regulates mandatory labeling requirements for all hazardous materials . [ 27 ] This is in addition to requirements by other federal agencies, like the United States Environmental Protection Agency , and Occupational Safety and Health Administration .
DOT is responsible for enforcement actions and public notification regarding hazardous chemical releases and exposures, including incidents involving federal workers. [ 28 ]
DOT requires that all buildings and vehicles containing hazardous materials must have signs that disclose specific types of hazards for certified first responders . [ citation needed ]
Safety of certain workers, such as mine workers, is governed by the US Department of Energy . Public information can be obtained in the form of directives. [ 29 ]
The United States Department of Defense manages environmental safety independent of OSHA and EPA. Spills, mishaps, illnesses, and injuries are not normally handled in accordance with local, state, and federal law.
Failure to administer discipline for illegal activity occurring within a military command is considered to be dereliction of duty , which is administered under the Uniform Code of Military Justice . [ citation needed ]
Individuals with information about environmental crimes and injuries involving the military are protected by whistleblower protection in the United States . Government employees, government contractors, and military officers often lack the training, education, licensing, and experience required to understand the legal requirements involving environmental safety. The sophistication required to understand legal requirements is not normally required for promotion and contractor selection within the military. [ citation needed ] Because of this, specific rules are documented in orders and directives that need to be written in plain language intended to be understood by people with a 4th-grade reading ability. [ citation needed ]
Laws are enforced by the commanding officer in military organizations. The commanding officer typically has the ability to read and understand written requirements. A Flag Officer is subject to Court-martial action if laws or government policies are violated under their command when the activity is outside the scope of mission orders and rules of engagement . Each commanding officer is responsible for writing and maintaining policies simple enough to be understood by everyone in their command. Each commanding officer is responsible for ensuring that command policy documents are made available to every person in their command (civilian, military, and contractor). The commanding officer is responsible for disciplinary action and public disclosures when policies are violated within their command. [ citation needed ]
The commanding officer shares responsibilities for crimes that are not punished (dereliction). [ citation needed ]
Military agencies operate independently of law enforcement, judicial authority, and common law . Similar exemptions exist for some state agencies. [ citation needed ]
Potential crimes are investigated by military police . The following is an example of the kinds of policy documents used to conduct criminal investigations. [ 30 ]
Because military law enforcement is performed with no independent civilian oversight, there is an inherent conflict of interest . Information and disclosures are obtained through Freedom of Information Act request and not through disclosures ordinarily associated with the EPA and OSHA that have the competency required for training, certification, disclosure, and enforcement. This prevents physicians from obtaining the kind of information needed to diagnose and treat environmental illness, so the root cause for environmental illness typically remains permanently unknown. The following organization may help when the root cause for an illness remains unknown longer than 30 days. [ 31 ]
Criminal violations, injuries, and potential enforcement actions begin by exchanging information in the following venues when civilian government employees and flag officers are unable to deal with the situation in an ethical manner.
US federal laws, state laws, local laws, foreign laws, and treaty agreements may not apply.
Policies are established by Executive Order and not public law, except for interventions by the United States Congress and interventions by US district courts. [ 34 ]
The following US presidential executive orders establish the requirements for DoD environmental policy for government organizations within the executive branch of the United States.
The following unclassified documents provide further information for programs managed by the United States Secretary of Defense .
The information described in this section is for the United States, but most countries have similar regulatory requirements.
Two mandatory documents must provide hazard information for most toxic products.
Product label requirements are established by the Federal Insecticide, Fungicide, and Rodenticide Act under the authority of the United States Environmental Protection Agency . As a minimum this requires information about the chemical makeup of the product, instructions required for the safe use of the product, and contact information for the manufacturer of the product.
A Safety Data Sheet is required under the authority of the United States Occupational Safety and Health Administration for hazardous materials to communicate health and safety risks needed by health care professionals and emergency responders.
A summary of workers rights is available from OSHA . [ 61 ]
Chemical information is most frequently associated with the right to know but there are many other types of information that are important to workplace safety and health. The following sources of information are those most likely to be found at the workplace or in state or federal agencies with jurisdiction over the workplace:
Note: Refer to 29 CFR 1910.1200 for the most current and updated information. [ 66 ]
The Hazard Communication Standard [ 67 ] first went into effect in 1985 and has since been expanded to cover almost all workplaces under OSHA jurisdiction. The details of the Hazard Communication standard are rather complicated, but the basic idea behind it is straightforward. It requires chemical manufacturers and employers to communicate information to workers about the hazards of workplace chemicals or products, including training.
The Hazard Communication standard does not specify how much training a worker must receive. Instead, it defines what the training must cover. Employers must conduct training in a language comprehensible to employees to be in compliance with the standard. It also states that workers must be trained at the time of initial assignment and whenever a new hazard is introduced into their work area. The purpose for this is so that workers can understand the hazards they face and so that they are aware of the protective measures that should be in place.
It is very difficult to get a good understanding of chemical hazards and particularly to be able to read SDSs in the short amount of time that many companies devote to hazard communication training. When OSHA conducts an inspection, the inspector will evaluate the effectiveness of the training by reviewing records of what training was done and by interviewing employees who use chemicals to find out what they understand about the hazards. [ 68 ]
The United States Department of Transportation (DOT) regulates hazmat transportation within the territory of the US by Title 49 of the Code of Federal Regulations . [ 69 ]
All chemical manufacturers and importers must assess the hazards of the chemicals they produce and import and pass this information on to transportation workers and purchasers through labels and Safety Data Sheets (SDSs). Employers whose employees may be exposed to hazardous chemicals on the job must provide hazardous chemical information to those employees through the use of SDSs, properly labeled containers, training, and a written hazard communication program. This standard also requires the employer to maintain a list of all hazardous chemicals used in the workplace. The SDSs for these chemicals must be kept current and they must be made available and accessible to employees in their work areas.
Chemicals that may pose health risks or those that are physical hazards (such as fire or explosion) are covered. Lists of chemicals that are considered hazardous are maintained according to use or purpose. There are several existing sources that manufacturers and employers may consult. These include:
Ultimately, it is up to the manufacturer to disclose hazards.
There are other sources of information about chemicals used in industry as a result of state and federal laws regarding the Community Right to Know Act.
The Air Resources Board is responsible for public hazard disclosures in California . [ 70 ] Pesticide use disclosures are made by each pest control supervisor to the County Agricultural Commission. [ 71 ] Epidemiology information is available from the California Pesticide Information Portal, which can be used by health care professionals to identify the cause for environmental illness. [ 72 ]
Under the Oregon Community Right to Know Act (ORS 453.307-372) and the federal Superfund Amendments and Reauthorization Act (SARA) Title III, the Office of the State Fire Marshal collects information on hazardous substances and makes it available to emergency responders and to the general public. Among the information which companies must report are:
The information can be obtained in the form of an annual report of releases for the state or for specific companies. It is available on request from the Fire Marshal's Office and is normally free of charge unless unusually large quantities of data are involved.
Each container that contains a hazardous chemical must be labeled by the manufacturer or distributor before it is sent to downstream users. There is no single standard format for labels. Each product must be labeled according to the specific type of hazard.
Pesticide and fungicide labeling is regulated by the Environmental Protection Agency. [ 73 ]
Employers are required to inform the public of:
In addition, these items must be covered in training:
Note: Refer to 29 CFR 1910.1200 for the most current and updated information ( https://www.osha.gov/pls/oshaweb/owadisp.show_document?p_table=standards&p_id=10099 )
SDS information is required by EPA, OSHA, DOT, and/or DOE regulations, depending upon the type of hazardous substance. The Safety Data Sheet includes the following information.
Chemical manufacturers may legally withhold the specific chemical identity of a material from the SDS and label in the case of bona fide trade secrets. In such cases the following rules apply:
The Hazard Communication standard requires that chemical information must be transmitted to employees who work with hazardous materials. Employee exposure records can tell if a worker is actually being exposed to a chemical or physical hazard and how much exposure he or she is receiving. OSHA regulations that establish access rights to these records are found in 29 CFR 1910.1020: Access to Medical and Exposure Records. [ 62 ] This information is usually the product of some type of monitoring or measurement for:
Employees and their designated representatives have the right under OR-OSHA regulations to examine or copy exposure records that are in the possession of the employer. This right applies not only to records of an employee's own exposure to chemical, physical, or biological agents but also to exposure records of other employees whose working conditions are similar to the employee's. Union representatives have the right to see records for any work areas in which the union represents employees. [ citation needed ]
In addition to seeing the results, employees and their representatives also have the right to observe the actual measurement of hazardous chemical or noise exposure. [ citation needed ]
Exposure records that are part of an OR-OSHA inspection file are also accessible to employees and union representatives. In fact these files, with the exception of certain confidential information, are open to the public after the inspection has been legally closed out. [ citation needed ]
Many employers keep some type of medical records . These could be medical questionnaires, results of pre-employment physical examinations, results from blood tests, or more elaborate records of ongoing diagnosis or treatment (such as all biological monitoring not defined as an employee exposure record). OSHA regulations that establish access rights to these records are found in 29 CFR 1910.1020: Access to Medical and Exposure Records. [ 62 ]
Medical records are considerably more personal than exposure records or accident reports, so the rules governing confidentiality and access to them are stricter. Because of this extra scrutiny, much employer-held medical information falls outside the definition of an employee medical record. A good rule of thumb is that if the information is maintained separately from the employer's medical program, it probably will not be accessible. [ citation needed ]
Examples of separately maintained medical information would be records of voluntary employee assistance programs (alcohol, drug abuse, or personal counseling programs), medical records concerning health insurance claims or records created solely in preparation for litigation. [ citation needed ]
These records are often kept at the worksite if there is an on-site physician or nurse. They could also be in the files of a physician, clinic, or hospital with whom the employer contracts for medical services. [ citation needed ]
An employee has access to his or her own medical record (29 CFR 1910.1020). An individual employee may also sign a written release authorizing a designated representative (such as a union representative) to receive access to his or her medical record. The latter might occur in a case where the union or a physician or other researcher working for the union or employer needs medical information on a whole group of workers to document a health problem. Certain confidential information may be deleted from an employee's record before it is released. [ citation needed ]
The push towards greater availability of information came from events that killed many people and exposed others to toxins, such as the Bhopal disaster in India in December 1984. During the Bhopal disaster, a cloud of methyl isocyanate escaped from an insecticide plant due to neglect, and as a result, 2,000 people were killed and many more were injured. The plant had already been noted for its poor safety record and its lack of an evacuation or emergency plan. The community's lack of awareness and knowledge of the dangers led to a disaster that could have been avoided. [ 74 ]
Shortly after, the Emergency Planning and Community Right-to-Know Act of 1986, originally introduced by California Democrat Henry Waxman , was passed. This act was the first official step toward educating the public about corporate pollutants and their effects. The act required industrial facilities across the U.S. to disclose information on their annual releases of toxic chemicals. The collected data is made available by the Environmental Protection Agency in the Toxics Release Inventory (TRI), which is open to the public. This was seen as a step in the right direction; however, the act required only the weight in pounds of individual pollutants to be reported. No information about toxicity, spread, or overlap had to be shared with the public. [ citation needed ]
In the years since, the public has gained better access to information that corporations with excess pollutants had withheld. One newer resource is the Toxic 100, a list of the one hundred largest industrial air polluters in the United States, ranked by the quantity of pollution they produce and the toxicity of the pollutants. The rankings are determined by the Political Economy Research Institute (PERI) and calculated with factors such as winds carrying the pollution, the height of smokestacks, and the impact on nearby communities. [ 75 ]
Right to repair is a legal right for owners of devices and equipment to freely modify and repair products such as automobiles, electronics, and farm equipment. Right to repair may also refer to the social movement of citizens putting pressure on their governments to enact laws protecting a right to repair. [ 1 ]
Common obstacles to repair include requirements to use only the manufacturer's maintenance services, restrictions on access to tools and components, and software barriers.
Proponents for this right point to the benefits in affordability, sustainability, and availability of critical supplies in times of crisis.
While initially driven largely by automotive consumer protection agencies and the automotive after-sales service industry, the discussion of establishing a right to repair, not only for vehicles but for any kind of electronic product, gained traction as consumer electronics such as smartphones and computers became universally available, causing broken and used electronics to become the fastest-growing waste stream. [ 2 ] [ 3 ] Today it is estimated that more than half of the population of the Western world has one or more used or broken electronic devices at home that are not reintroduced into the market due to a lack of affordable repair. [ 4 ]
In addition to consumer goods, repair access for healthcare equipment made news at the start of the COVID-19 pandemic in 2020 , when hospitals had trouble getting maintenance for some critical high-demand medical equipment, most notably ventilators. [ 5 ] [ 6 ] [ 7 ]
The pandemic has also been credited with helping to grow the right-to-repair movement, since many repair shops were closed. [ 8 ] The Economist also cites the expectation that owners of products should be able to repair them as a sense of moral justice or property rights . [ 9 ] Those fighting against planned obsolescence have also taken note of cases where repair costs exceed replacement costs because the companies that created the product have retained a monopoly on its repair, driving up prices. [ 8 ]
Right to repair refers to the concept that end users of technical, electronic or automotive devices should be allowed to freely repair these products. Some notable aspects of a product include: [ 10 ]
Some goals of the right to repair are to favor repair instead of replacement, and make such repairs more affordable leading to a more sustainable economy and reduction in electronic waste. [ 11 ] [ 2 ] [ 12 ]
The use of glue or proprietary screws can make repairs more difficult. [ 11 ] In general, proprietary parts and accessories can make products more difficult to repair, such as Apple 's "Lightning" charging ports and adapters, which require a non-standard process to repair, leading the European Union to standardize charging ports for small devices, requiring all devices to use USB-C . [ 13 ]
Parts and tools needed to make repairs should be available to everyone, including consumers. [ 11 ]
Parts pairing prevents parts from being swapped without a password that the manufacturer provides only to preferred technicians. [ 13 ] New ways to lock devices, such as parts pairing (components of a device are serialized and cannot be swapped for others), became increasingly popular among manufacturers, as did digital rights management . [ 14 ] Using approved parts can increase the cost of the repair, leading many consumers to speed up their upgrade cycle to a new device. [ 15 ]
In addition to access to software updates, the ability to install third-party software is also mentioned as a major goal, which would, for example, allow some devices to be adapted over time. [ 11 ]
Manuals and design schematics should be freely available and help consumers know how to repair their devices. [ 11 ] [ 16 ]
The strategy to continuously change products to create continuous demand for the latest generation was pursued at a large scale by General Motors executive Alfred P. Sloan . [ 18 ] [ 19 ] GM overtook Ford as the biggest American automaker and planned obsolescence with annual variants of a product became widely adopted across industries in the American economy, eventually becoming adopted by Ford by 1933. [ 20 ] [ 21 ]
The car industry was at the forefront of establishing the concept of certified repair: starting from the 1910s, Ford established certified dealerships and service networks to promote parts made by Ford instead of independent repair shops and often after-sales parts. Ford also pushed for standardized pricing among certified repair shops, making flat fees mandatory even for different repairs. The combination of annual updates to cars and components made it more difficult for independent repair shops to maintain a stock of parts. [ 22 ] [ 21 ]
A couple of court cases have required products with repaired or refurbished components to be labeled as "used." [ 23 ] [ 24 ]
In 1947, a business owner was refurbishing old spark plugs and reselling them under a trademarked name. This led to a lawsuit that provided the framework for legislation establishing a right to resell repaired or refurbished items, as long as they are labelled correctly.
Champion Spark Plug Co. v. Sanders provided the basis of FTC guidelines establishing a right to resell repaired or refurbished items as long as they are labeled as such. The decision also provided the framework for trademark guidelines regarding the resale of used goods under a trademarked name.
FTC guidelines in Title 16, Chapter I, Subchapter B, Part 20 provide guidance and regulations on the labeling of items that have been "rebuilt", "refurbished", or "re-manufactured", in order to prevent unfair competitive advantage in selling components in the automobile industry. These guidelines thus allow businesses to repair items for later resale.
Some manufacturers shifted towards more repairable designs. Apple , which rose quickly to become one of the largest computer manufacturers, sold the first computers with circuit board descriptions, easy-to-swap components, and clear repair instructions. [ 25 ]
Copyright with regard to computer software source code also became a front on the limitation of repairability. In the U.S., the Digital Millennium Copyright Act of 1998 prohibits repairs unless granted an exception, and has been used to block repairs as software became more common in a range of devices and appliances. [ 26 ] [ 27 ]
To prevent refilling of empty ink cartridges , manufacturers had started placing microchips counting fill levels and usage, rendering refills difficult or impossible. Reselling and refurbishing products was confirmed to be legal by the Supreme Court in 2017 in Impression Prods., Inc. v. Lexmark Int'l, Inc. . [ 28 ] As of 2022, complaints about the longevity and repairability of printers remains. [ 29 ]
In the early 2000s, the automotive industry defeated the first proposal of a right to repair bill for the automotive sector . [ 30 ] While the National Automotive Service Task Force (NASTF), an organization supported by the automotive industry, established an online directory for accessing manufacturer information and tools in 2001, [ 31 ] a study conducted by the Terrance Group found that around 59% of independent repair services continued to struggle to get access to diagnostic tools and parts from manufacturers. [ 32 ] [ non-primary source needed ] The share of electronic components in the total bill of materials for a car also rose from 5% in the 1970s to over 22% in 2000. [ 33 ] The increasing hybridization of cars created a need for special tools that manufacturers shared only with authorized repair services. [ 34 ]
A trend towards right to repair in automotive and other industries gained traction with more proposed laws and court decisions. [ 30 ] While initially driven by automotive consumer protection agencies and the automotive after-sales service industry, the discussion of establishing a right to repair for any kind of industrially produced device gained traction as consumer electronics such as smartphones and computers became widely used, alongside advanced computerized integration in farming equipment. The movement was also backed by climate change activists aiming to reduce e-waste. [ 35 ]
The first successful implementation of a right to repair came when Massachusetts passed the United States' first right to repair law for the automotive sector in 2012, which required automobile manufacturers to sell directly to consumers or to independent mechanics the same service materials and diagnostics that they had previously provided exclusively to their dealerships. As a result, major automobile trade organizations signed a Memorandum of Understanding in January 2014 using the Massachusetts law as the basis of their agreement for all 50 states starting in the 2018 model year. [ 36 ]
Companies like Apple, John Deere, and AT&T have lobbied against Right to Repair bills, creating a number of "strange bedfellows" from high-tech and agricultural sectors on both sides of the issue, according to Time . [ 47 ] The tech industry has lobbied in opposition through groups like TechNet [ 48 ] and the Entertainment Software Association ("ESA"). [ 49 ] The 2018 Statement of Principles from the Association of Equipment Manufacturers ("AEM") and their dealership counterparts, the Equipment Dealers Association, became the subject of media backlash when, in January 2021, the promised means to make complete repairs were still not visibly available. [ 50 ]
In late 2017, users of older iPhone models discovered evidence that recent updates to the phone's operating system, iOS , were throttling the phone's performance. This led to accusations that Apple sabotaged the performance of older iPhones to compel customers to buy new models more frequently. [ 51 ] [ 52 ] Apple disputed this assumed intention, stating instead that the goal of the software was to prevent overtaxing older lithium-ion batteries , which have degraded over time, to avoid unexpected shutdowns of the phone. [ 53 ] Furthermore, Apple allowed users to disable the feature in an iOS update but advised against it. [ 54 ] Additionally, Apple allowed users of affected iPhones to obtain service to replace batteries in their phones for a reduced cost of service ( US$29 compared to US$79 ) for the next six months. [ 55 ] However, the "right to repair" movement argued that the best outcome would be Apple allowing consumers to purchase third-party batteries and possess the instructions to replace it at a lower cost. [ 56 ]
In April 2018, the Federal Trade Commission sent notice to six automobile, consumer electronics, and video game console manufacturers, later revealed through a Freedom of Information Act request to be Hyundai , Asus , HTC , Microsoft , Sony , and Nintendo , stating that their warranty practices may violate the Magnuson-Moss Warranty Act . [ 57 ] The FTC specifically identified that informing consumers that warranties are voided if they break a warranty sticker or seal on the unit's packaging, use third-party replacement parts, or use third-party repair services is a deceptive practice, as these terms are only valid if the manufacturer provides free warranty service or replacement parts. [ 58 ] Both Sony and Nintendo released updated warranty statements following this notice. [ 59 ]
In April 2018, US Public Interest Research Group issued a statement defending Eric Lundgren over his sentencing for creating the ‘restore disks’ to extend the life of computers. [ 60 ] [ additional citation(s) needed ]
In 2018, the exemption for making software modifications to "land-based motor vehicles" was expanded to allow equipment owners to engage the services of third parties to assist with making changes. These changes were endorsed by the American Farm Bureau Federation . [ 61 ] [ 62 ] [ 63 ] In its 2021 recommendations, the Library of Congress further extended the exemption, with favorable right-to-repair considerations for automobiles, boats, agricultural vehicles, and medical equipment, as well as modifying prior rules related to other consumer goods. [ 64 ]
Senator Elizabeth Warren , as part of her campaign for president, laid out plans for legislation related to agriculture in March 2019 and stated her intent to introduce legislation to affirm the right to repair farm equipment, potentially expanding this to other electronic devices. [ 65 ] [ additional citation(s) needed ]
In August 2019, Apple announced a program through which independent repair shops may buy official replacement parts for Apple products. Several operators became authorized under its "IRP" program, but many smaller repair operators avoided the option due to legally onerous burdens. [ 66 ] [ additional citation(s) needed ]
In the 2010s the trend of making one's own repairs to devices spread from the East into Western Europe. [ 67 ] In July 2017, the European Parliament approved recommendations that member states should pass laws giving consumers the right to repair their electronics, as part of a larger update to the previous Ecodesign Directive from 2009, which called for manufacturers to produce more energy-efficient and cleaner consumer devices. The ability to repair devices is seen in these recommendations as a means to reduce waste to the environment. [ 68 ] With these recommendations, work began on establishing the legal Directive for the EU to support them, from which member states would then pass laws to meet the Directive. One of the first areas of focus was consumer appliances such as refrigerators and washing machines. Some were assembled using adhesives instead of mechanical fasteners, which made it impossible for consumers or repair technicians to make non-destructive repairs. The right-to-repair facets of appliances were a point of contention and lobbying between European consumer groups and appliance manufacturers. [ 67 ] Ultimately, the EU passed legislation in October 2019 requiring appliance manufacturers to supply replacement parts to professional repairers for ten years from manufacture. The legislation did not address other facets of the right to repair, and activists noted that it still limited consumers' ability to perform their own repairs. [ 69 ] Sweden also offers tax breaks for people who repair their own goods. [ 70 ]
The EU also has directives toward a circular economy which are aimed toward reducing greenhouse gas emissions and other excessive wastes through recycling and other programs. A 2020 "Circular Economy Action Plan" draft included the electronics right to repair for EU citizens to allow device owners to replace only malfunctioning parts rather than replace the entire device, reducing electronics waste. The Action Plan included additional standardization that would aid toward rights to repair, such as common power ports on mobile devices. [ 71 ]
In the midst of the COVID-19 pandemic , where medical equipment became critical for many hospitals, iFixit and a team of volunteers worked to publish and make accessible the largest known collection of manuals and service guides for medical equipment, using information crowdsourced from hospitals, medical institutions and sites like Frank's Hospital Workshop. iFixit had found, like with consumer electronics, some of the more expensive medical equipment had used means to make non-routine servicing difficult for end-users and requiring authorized repair processes. [ 72 ] [ 73 ]
2020 Massachusetts Question 1 passed, updating the previous measure on automobile repair to include electronic vehicle data. [ 74 ] Before it could come into effect, in June 2023 the federal National Highway Traffic Safety Administration instructed manufacturers to ignore the 2020 Massachusetts law, asserting that it was preempted by federal law because opening telematics to other organizations could make cars more vulnerable to computer hackers. (Both claims are disputed by Massachusetts in the lawsuit.) [ 75 ]
In May 2021, the Federal Trade Commission (FTC) issued a report "Nixing the Fix" to Congress that outlined issues around corporations' policies that limit repairs on consumer goods that it considered in violation of trade laws, and outlined steps that could be done to better enforce this. This included self-regulation by the industries involved, as well as expansion of existing laws such as the Magnuson-Moss Warranty Act or new laws to give the FTC better enforcement to protect consumers from overzealous repair restrictions. [ 76 ]
In July 2021, the Biden administration issued an executive order to the FTC [ 77 ] and the Department of Agriculture [ 78 ] to widely improve access to repair for both consumers and farmers. The executive order to the FTC included instructions to craft rules to prevent manufacturers from preventing repairs performed by owners or independent repair shops. [ 79 ] [ 80 ] About two weeks later, the FTC voted unanimously to enforce the right to repair as policy and to look to take action against companies that limit the type of repair work that can be done at independent repair shops. [ 81 ]
Apple announced in November 2021 that it would allow consumers to order parts and make repairs on Apple products, initially for iPhone 12 and 13 devices but eventually rolling out to include Mac computers. [ 82 ] [ additional citation(s) needed ] Reception to the program has been mixed, with Right to Repair advocate Louis Rossmann seeing the program as a step in the right direction but criticizing the omission of certain parts and the need to input a serial number before ordering parts. [ 83 ] [ additional citation(s) needed ]
In 2021, France created a repairability scoring system that took inspiration from iFixit's scorecard. France expressed its intent to merge it into a 'Durability index' that also considers how long items are expected to last. [ 39 ]
In 2022, Apple started enabling customers to repair batteries and screens. [ citation needed ] Additionally, Apple has prevented companies from repairing or refurbishing Apple's products without its permission. These actions have irritated consumers who believe Apple is against the right to repair. [ 84 ]
In 2022, Framework Computer , Adafruit , and Raspberry Pi , among other computer makers, started sharing 3D-printable models for replacement parts. [ 85 ]
On December 28, 2022, New York Governor Kathy Hochul signed into law the Digital Fair Repair Act , nearly seven months after it had passed the state senate . The law established the right of consumers and independent repairers to get manuals, diagrams, and original parts from manufacturers, although The Verge , Engadget , and Ars Technica noted that the bill was made less vigorous by way of last-minute changes that provided exceptions to original equipment manufacturers. It will apply to electronic devices sold in the state in 2023. [ 86 ] [ 87 ] [ 88 ]
John Deere announced in January 2023 that it was signing a memorandum of understanding with the American Farm Bureau Federation agreeing that American farmers had the right to repair their own equipment or have it serviced at independent repair shops in the United States. Consumers and independent repair centers would still be bound against divulging certain trade secrets, and cannot tamper or override emission control settings, but are otherwise free to repair as they see fit. [ 89 ]
In 2023, three business professors cautioned that right-to-repair laws by themselves could have unintended consequences, including incentivizing companies to create cheaper products that are less repairable or durable, or to raise the initial sale price of the item. [ 90 ] [ 91 ]
The U.S. Copyright Office, as part of its triennial review of exemptions to the Digital Millennium Copyright Act , approved an exemption allowing technical controls on retail-level commercial food preparation equipment to be bypassed for the purposes of repair and maintenance. Notoriously, the inability of third parties to repair such equipment had been the cause of numerous McDonald's ice cream machines being out of service, as the manufacturer, Taylor Company , had allowed only itself to repair these machines. [ 92 ]
Adopted on May 30, 2024, the European Union's Right to Repair Directive (R2RD) requires manufacturers to offer repair services that are both efficient and affordable, while also making sure consumers are aware of their repair rights. [ 93 ] Previously, the right to repair in the EU was regulated by the Sale of Goods Directive and the different product-specific Commission Regulations provided under the Ecodesign Directive . | https://en.wikipedia.org/wiki/Right_to_repair |
A right triangle or right-angled triangle , sometimes called an orthogonal triangle or rectangular triangle , is a triangle in which two sides are perpendicular , forming a right angle ( 1 ⁄ 4 turn or 90 degrees ).
The side opposite to the right angle is called the hypotenuse (side c {\displaystyle c} in the figure). The sides adjacent to the right angle are called legs (or catheti , singular: cathetus ). Side a {\displaystyle a} may be identified as the side adjacent to angle B {\displaystyle B} and opposite (or opposed to ) angle A , {\displaystyle A,} while side b {\displaystyle b} is the side adjacent to angle A {\displaystyle A} and opposite angle B . {\displaystyle B.}
Every right triangle is half of a rectangle which has been divided along its diagonal . When the rectangle is a square , its right-triangular half is isosceles , with two congruent sides and two congruent angles. When the rectangle is not a square, its right-triangular half is scalene .
Every triangle whose base is the diameter of a circle and whose apex lies on the circle is a right triangle, with the right angle at the apex and the hypotenuse as the base; conversely, the circumcircle of any right triangle has the hypotenuse as its diameter. This is Thales' theorem .
The legs and hypotenuse of a right triangle satisfy the Pythagorean theorem : the sum of the areas of the squares on two legs is the area of the square on the hypotenuse, a 2 + b 2 = c 2 . {\displaystyle a^{2}+b^{2}=c^{2}.} If the lengths of all three sides of a right triangle are integers, the triangle is called a Pythagorean triangle and its side lengths are collectively known as a Pythagorean triple .
The relations between the sides and angles of a right triangle provide one way of defining and understanding trigonometry , the study of the metrical relationships between lengths and angles.
The three sides of a right triangle are related by the Pythagorean theorem , which in modern algebraic notation can be written

a 2 + b 2 = c 2 , {\displaystyle a^{2}+b^{2}=c^{2},}
where c {\displaystyle c} is the length of the hypotenuse (side opposite the right angle), and a {\displaystyle a} and b {\displaystyle b} are the lengths of the legs (remaining two sides). Pythagorean triples are integer values of a , b , c {\displaystyle a,b,c} satisfying this equation. This theorem was proven in antiquity, and is proposition I.47 in Euclid's Elements : "In right-angled triangles the square on the side subtending the right angle is equal to the squares on the sides containing the right angle."
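As a quick numeric sketch (the function name below is illustrative, not from the source), the triple condition can be checked directly with integer arithmetic:

```python
def is_pythagorean_triple(a: int, b: int, c: int) -> bool:
    """Check whether positive integer side lengths satisfy a^2 + b^2 = c^2."""
    return a > 0 and b > 0 and c > 0 and a * a + b * b == c * c

# The classic 3-4-5 right triangle and a scaled copy of it.
print(is_pythagorean_triple(3, 4, 5))    # True
print(is_pythagorean_triple(6, 8, 10))   # True
print(is_pythagorean_triple(2, 3, 4))    # False
```

Any integer multiple of a triple is again a triple, since scaling all three sides by k multiplies both sides of the equation by k².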
As with any triangle, the area is equal to one half the base multiplied by the corresponding height. In a right triangle, if one leg is taken as the base then the other is the height, so the area of a right triangle is one half the product of the two legs. As a formula the area T {\displaystyle T} is

T = 1 2 a b {\displaystyle T={\tfrac {1}{2}}ab}
where a {\displaystyle a} and b {\displaystyle b} are the legs of the triangle.
If the incircle is tangent to the hypotenuse A B {\displaystyle AB} at point P , {\displaystyle P,} then letting the semi-perimeter be s = 1 2 ( a + b + c ) , {\displaystyle s={\tfrac {1}{2}}(a+b+c),} we have | P A | = s − a {\displaystyle |PA|=s-a} and | P B | = s − b , {\displaystyle |PB|=s-b,} and the area is given by

T = | P A | ⋅ | P B | = ( s − a ) ( s − b ) . {\displaystyle T=|PA|\cdot |PB|=(s-a)(s-b).}
This formula only applies to right triangles. [ 1 ]
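The two area formulas, half the product of the legs and the tangent-point product (s − a)(s − b), can be checked against each other numerically; the side lengths and function names below are illustrative:

```python
import math

def area_from_legs(a: float, b: float) -> float:
    """Area as half the product of the two legs."""
    return a * b / 2

def area_from_tangent_point(a: float, b: float) -> float:
    """Area as (s - a)(s - b), with s the semiperimeter and c the hypotenuse."""
    c = math.hypot(a, b)          # hypotenuse from the Pythagorean theorem
    s = (a + b + c) / 2           # semiperimeter
    return (s - a) * (s - b)

# For a 3-4-5 triangle both formulas give 6.
print(area_from_legs(3, 4))       # 6.0
print(area_from_tangent_point(3, 4))
```

The agreement holds for any pair of legs, since both expressions reduce algebraically to ab/2 when c² = a² + b².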
If an altitude is drawn from the vertex with the right angle to the hypotenuse, then the triangle is divided into two smaller triangles; these are both similar to the original and therefore similar to each other. From this:
In equations,

f 2 = d e , {\displaystyle f^{2}=de,} b 2 = c e , {\displaystyle b^{2}=ce,} a 2 = c d , {\displaystyle a^{2}=cd,}

where a , b , c , d , e , f {\displaystyle a,b,c,d,e,f} are as shown in the diagram. [ 3 ] Thus

f = a b c . {\displaystyle f={\frac {ab}{c}}.}
Moreover, the altitude to the hypotenuse is related to the legs of the right triangle by [ 4 ] [ 5 ]

1 a 2 + 1 b 2 = 1 f 2 . {\displaystyle {\frac {1}{a^{2}}}+{\frac {1}{b^{2}}}={\frac {1}{f^{2}}}.}
For solutions of this equation in integer values of a , b , c , f , {\displaystyle a,b,c,f,} see here .
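These altitude relations can be verified numerically. Below, f = ab/c denotes the altitude to the hypotenuse and d, e the two segments into which its foot divides the hypotenuse (an assumption consistent with the standard diagram; the specific leg lengths are illustrative):

```python
import math

a, b = 3.0, 4.0            # legs
c = math.hypot(a, b)       # hypotenuse, 5.0
f = a * b / c              # altitude to the hypotenuse
d = a * a / c              # hypotenuse segment adjacent to leg a
e = b * b / c              # hypotenuse segment adjacent to leg b

assert math.isclose(d + e, c)                        # segments partition the hypotenuse
assert math.isclose(f * f, d * e)                    # altitude is the geometric mean of the segments
assert math.isclose(1 / f**2, 1 / a**2 + 1 / b**2)   # reciprocal (inverse) Pythagorean relation
print(f)   # 2.4
```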
The altitude from either leg coincides with the other leg. Since these intersect at the right-angled vertex, the right triangle's orthocenter —the intersection of its three altitudes—coincides with the right-angled vertex.
The radius of the incircle of a right triangle with legs a {\displaystyle a} and b {\displaystyle b} and hypotenuse c {\displaystyle c} is

r = a + b − c 2 = a b a + b + c . {\displaystyle r={\frac {a+b-c}{2}}={\frac {ab}{a+b+c}}.}
The radius of the circumcircle is half the length of the hypotenuse,

R = c 2 . {\displaystyle R={\frac {c}{2}}.}
Thus the sum of the circumradius and the inradius is half the sum of the legs: [ 6 ]

r + R = a + b 2 . {\displaystyle r+R={\frac {a+b}{2}}.}
One of the legs can be expressed in terms of the inradius and the other leg as

a = 2 r ( b − r ) b − 2 r . {\displaystyle a={\frac {2r(b-r)}{b-2r}}.}
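These incircle and circumcircle relations can be confirmed numerically for a concrete triangle; the function name and the 3-4-5 side lengths are illustrative, and the final line recovers one leg from the inradius and the other leg:

```python
import math

def incircle_circumcircle(a: float, b: float):
    """Inradius and circumradius of a right triangle with legs a, b."""
    c = math.hypot(a, b)
    r = (a + b - c) / 2    # inradius
    R = c / 2              # circumradius
    return r, R

r, R = incircle_circumcircle(3, 4)
assert math.isclose(r + R, (3 + 4) / 2)   # r + R is half the sum of the legs

# Recovering leg a from the inradius r and the other leg b:
b = 4
a = 2 * r * (b - r) / (b - 2 * r)
print(r, R, a)   # 1.0 2.5 3.0
```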
A triangle △ A B C {\displaystyle \triangle ABC} with sides a ≤ b < c {\displaystyle a\leq b<c} , semiperimeter s = 1 2 ( a + b + c ) {\textstyle s={\tfrac {1}{2}}(a+b+c)} , area T , {\displaystyle T,} altitude h c {\displaystyle h_{c}} opposite the longest side, circumradius R , {\displaystyle R,} inradius r , {\displaystyle r,} exradii r a , r b , r c {\displaystyle r_{a},r_{b},r_{c}} tangent to a , b , c {\displaystyle a,b,c} respectively, and medians m a , m b , m c {\displaystyle m_{a},m_{b},m_{c}} is a right triangle if and only if any one of the statements in the following six categories is true. Each of them is thus also a property of any right triangle.
The trigonometric functions for acute angles can be defined as ratios of the sides of a right triangle. For a given angle, a right triangle may be constructed with this angle, and the sides labeled opposite, adjacent and hypotenuse with reference to this angle according to the definitions above. These ratios of the sides do not depend on the particular right triangle chosen, but only on the given angle, since all triangles constructed this way are similar . If, for a given angle α, the opposite side, adjacent side and hypotenuse are labeled O , {\displaystyle O,} A , {\displaystyle A,} and H , {\displaystyle H,} respectively, then the trigonometric functions are

sin ⁡ α = O H , cos ⁡ α = A H , tan ⁡ α = O A . {\displaystyle \sin \alpha ={\frac {O}{H}},\quad \cos \alpha ={\frac {A}{H}},\quad \tan \alpha ={\frac {O}{A}}.}
For the expression of hyperbolic functions as ratio of the sides of a right triangle, see the hyperbolic triangle of a hyperbolic sector .
The values of the trigonometric functions can be evaluated exactly for certain angles using right triangles with special angles. These include the 30-60-90 triangle which can be used to evaluate the trigonometric functions for any multiple of 1 6 π , {\displaystyle {\tfrac {1}{6}}\pi ,} and the isosceles right triangle or 45-45-90 triangle which can be used to evaluate the trigonometric functions for any multiple of 1 4 π . {\displaystyle {\tfrac {1}{4}}\pi .}
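The special-angle values can be confirmed against the side ratios of these two triangles (side lengths 1 : √3 : 2 and 1 : 1 : √2):

```python
import math

# 30-60-90 triangle (sides 1 : sqrt(3) : 2), taking the 30-degree angle:
O, A, H = 1.0, math.sqrt(3), 2.0
assert math.isclose(math.sin(math.pi / 6), O / H)    # sin 30 = 1/2
assert math.isclose(math.cos(math.pi / 6), A / H)    # cos 30 = sqrt(3)/2
assert math.isclose(math.tan(math.pi / 6), O / A)    # tan 30 = 1/sqrt(3)

# 45-45-90 (isosceles right) triangle (sides 1 : 1 : sqrt(2)):
O, A, H = 1.0, 1.0, math.sqrt(2)
assert math.isclose(math.sin(math.pi / 4), O / H)    # sin 45 = 1/sqrt(2)
assert math.isclose(math.cos(math.pi / 4), A / H)    # cos 45 = 1/sqrt(2)
assert math.isclose(math.tan(math.pi / 4), O / A)    # tan 45 = 1
```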
Let H , {\displaystyle H,} G , {\displaystyle G,} and A {\displaystyle A} be the harmonic mean , the geometric mean , and the arithmetic mean of two positive numbers a {\displaystyle a} and b {\displaystyle b} with a > b . {\displaystyle a>b.} If a right triangle has legs H {\displaystyle H} and G {\displaystyle G} and hypotenuse A , {\displaystyle A,} then [ 13 ] a b = ϕ 3 , {\displaystyle {\frac {a}{b}}=\phi ^{3},}
where ϕ = 1 2 ( 1 + 5 ) {\displaystyle \phi ={\tfrac {1}{2}}{\bigl (}1+{\sqrt {5}}{\bigr )}} is the golden ratio . Since the sides of this right triangle are in geometric progression , this is the Kepler triangle .
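A numerical sketch of this fact (the ratio a : b = φ³ : 1 is exactly the condition under which the three means satisfy the Pythagorean relation): the means H, G, A then form a right triangle whose sides are in geometric progression with common ratio √φ.

```python
import math

phi = (1 + math.sqrt(5)) / 2        # golden ratio

def means(a, b):
    """Harmonic, geometric, and arithmetic means of positive a, b."""
    return 2 * a * b / (a + b), math.sqrt(a * b), (a + b) / 2

# With a/b = phi**3, the three means form a right triangle: H^2 + G^2 = A^2.
H, G, A = means(phi ** 3, 1.0)
assert math.isclose(H ** 2 + G ** 2, A ** 2)

# Its sides are in geometric progression with ratio sqrt(phi): a Kepler triangle.
assert math.isclose(G / H, math.sqrt(phi))
assert math.isclose(A / G, math.sqrt(phi))
```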
Thales' theorem states that if B C {\displaystyle BC} is the diameter of a circle and A {\displaystyle A} is any other point on the circle, then △ A B C {\displaystyle \triangle ABC} is a right triangle with a right angle at A . {\displaystyle A.} The converse states that the hypotenuse of a right triangle is the diameter of its circumcircle . As a corollary, the circumcircle has its center at the midpoint of the diameter, so the median through the right-angled vertex is a radius, and the circumradius is half the length of the hypotenuse.
The following formulas hold for the medians of a right triangle: m a 2 = b 2 + a 2 4 , m b 2 = a 2 + b 2 4 , m c = 1 2 c . {\displaystyle m_{a}^{2}=b^{2}+{\frac {a^{2}}{4}},\qquad m_{b}^{2}=a^{2}+{\frac {b^{2}}{4}},\qquad m_{c}={\tfrac {1}{2}}c.}
The median on the hypotenuse of a right triangle divides the triangle into two isosceles triangles, because the median equals one-half the hypotenuse.
The medians m a {\displaystyle m_{a}} and m b {\displaystyle m_{b}} from the legs satisfy [ 6 ] : p.136, #3110 m a 2 + m b 2 = 5 m c 2 = 5 4 c 2 . {\displaystyle m_{a}^{2}+m_{b}^{2}=5m_{c}^{2}={\tfrac {5}{4}}c^{2}.}
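The median relations can be verified coordinate-wise. The sketch below places the right angle at the origin and checks, on a 3-4-5 triangle, that the median to the hypotenuse equals c/2 and that the medians from the legs satisfy m_a² + m_b² = 5 m_c²:

```python
import math

def medians(a, b):
    """Median lengths of a right triangle with legs a, b and right angle at C.

    Vertices: C = (0, 0), B = (a, 0), A = (0, b); each median runs from a
    vertex to the midpoint of the opposite side.
    """
    def mid(P, Q):
        return ((P[0] + Q[0]) / 2, (P[1] + Q[1]) / 2)

    def dist(P, Q):
        return math.hypot(P[0] - Q[0], P[1] - Q[1])

    C, B, A = (0.0, 0.0), (a, 0.0), (0.0, b)
    m_a = dist(A, mid(B, C))        # median to leg a
    m_b = dist(B, mid(A, C))        # median to leg b
    m_c = dist(C, mid(A, B))        # median to the hypotenuse
    return m_a, m_b, m_c, math.hypot(a, b)

m_a, m_b, m_c, c = medians(3.0, 4.0)
assert math.isclose(m_c, c / 2)                         # median to hypotenuse
assert math.isclose(m_a ** 2 + m_b ** 2, 5 * m_c ** 2)  # = (5/4) c^2
```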
In a right triangle, the Euler line contains the median on the hypotenuse—that is, it goes through both the right-angled vertex and the midpoint of the side opposite that vertex. This is because the right triangle's orthocenter, the intersection of its altitudes, falls on the right-angled vertex while its circumcenter, the intersection of its perpendicular bisectors of sides , falls on the midpoint of the hypotenuse.
In any right triangle the diameter of the incircle is less than half the hypotenuse, and more strongly it is less than or equal to the hypotenuse times ( 2 − 1 ) . {\displaystyle ({\sqrt {2}}-1).} [ 14 ] : p.281
In a right triangle with legs a , b {\displaystyle a,b} and hypotenuse c , {\displaystyle c,} c ≥ 2 2 ( a + b ) , {\displaystyle c\geq {\tfrac {\sqrt {2}}{2}}(a+b),}
with equality only in the isosceles case. [ 14 ] : p.282, p.358
If the altitude from the hypotenuse is denoted h c , {\displaystyle h_{c},} then h c ≤ 2 4 ( a + b ) , {\displaystyle h_{c}\leq {\tfrac {\sqrt {2}}{4}}(a+b),}
with equality only in the isosceles case. [ 14 ] : p.282
If segments of lengths p {\displaystyle p} and q {\displaystyle q} emanating from vertex C {\displaystyle C} trisect the hypotenuse into segments of length 1 3 c , {\displaystyle {\tfrac {1}{3}}c,} then [ 2 ] : pp. 216–217 p 2 + q 2 = 5 ( 1 3 c ) 2 . {\displaystyle p^{2}+q^{2}=5\left({\tfrac {1}{3}}c\right)^{2}.}
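A coordinate check of this trisection identity, p² + q² = 5(c/3)², on the 3-4-5 triangle:

```python
import math

a, b = 3.0, 4.0                     # legs; right angle at C = (0, 0)
c = math.hypot(a, b)
B, A = (a, 0.0), (0.0, b)

# Points trisecting the hypotenuse AB.
P = (B[0] + (A[0] - B[0]) / 3, B[1] + (A[1] - B[1]) / 3)
Q = (B[0] + 2 * (A[0] - B[0]) / 3, B[1] + 2 * (A[1] - B[1]) / 3)

p = math.hypot(*P)                  # segment lengths from C
q = math.hypot(*Q)

assert math.isclose(p ** 2 + q ** 2, 5 * (c / 3) ** 2)
```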
The right triangle is the only triangle having two, rather than one or three, distinct inscribed squares. [ 15 ]
Given any two positive numbers h {\displaystyle h} and k {\displaystyle k} with h > k , {\displaystyle h>k,} let h {\displaystyle h} and k {\displaystyle k} be the sides of the two inscribed squares in a right triangle with hypotenuse c . {\displaystyle c.} Then
These sides and the incircle radius r {\displaystyle r} are related by a similar formula:
The perimeter of a right triangle equals the sum of the radii of the incircle and the three excircles : a + b + c = r + r a + r b + r c . {\displaystyle a+b+c=r+r_{a}+r_{b}+r_{c}.}
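This can be checked with the standard formulas r = T/s and r_a = T/(s − a) (and cyclically), where T is the area and s the semiperimeter:

```python
import math

def radii_and_perimeter(a, b):
    """Inradius, the three exradii, and the perimeter of a right triangle."""
    c = math.hypot(a, b)
    s = (a + b + c) / 2             # semiperimeter
    T = a * b / 2                   # area
    r = T / s                       # inradius
    r_a, r_b, r_c = T / (s - a), T / (s - b), T / (s - c)   # exradii
    return r, r_a, r_b, r_c, a + b + c

r, r_a, r_b, r_c, perimeter = radii_and_perimeter(3.0, 4.0)
assert math.isclose(r + r_a + r_b + r_c, perimeter)   # 1 + 2 + 3 + 6 = 12
```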
With the adoption of a new constitution in 2008 under president Rafael Correa , Ecuador became the first country in the world to enshrine a set of codified Rights of Nature and to give substantive content to those rights. Article 10 and Chapter 7, Articles 71–74, of the Ecuadorian Constitution recognize the inalienable rights of ecosystems to exist and flourish, give people the authority to petition on behalf of nature, and require the government to remedy violations of these rights.
Sumak kawsay , a Quechua phrase rendered in Spanish as buen vivir ("good living"), is rooted in the cosmovisión (or worldview ) of the Quechua peoples of the Andes and describes a way of life that is community-centric, ecologically balanced and culturally sensitive. [ 1 ] The concept is related to a tradition of legal and political scholarship advocating legal standing for the natural environment. [ 2 ] The rights approach is a break away from traditional environmental regulatory systems, which regard nature as property. [ 3 ]
Ecuador's Rights of Nature embodies the indigenous sumak kawsay principles, giving Pachamama constitutional rights to protect and restore its environment.
President Rafael Correa entered office in January 2007 with the help of La Revolución Ciudadana ( The Citizens' Revolution ), promising a new anti-neoliberal Ecuador: a country that would unify and harmonize the broken relationships between the state, the economy, society, and its vital resources. [ 4 ] Being the eighth president in ten years, Correa called for a Constitutional Assembly to create a new constitution for Ecuador.
Ecuador relies heavily on the income gained from exploiting its natural resources. The country's largest export, crude petroleum, accounts for 29% of its exports, with a total value of $5.63 billion. [ 5 ] This extraction has caused vast deforestation in the Amazon, contaminated water, and widespread illness.
Ecuador is also home to at least eight tribes of indigenous peoples, most of which reside in the Amazon, that have suffered from the negative environmental consequences of the extraction of oil. After several years of worsening economic and environmental conditions, uprisings from various indigenous communities, who found themselves receiving less support from the state, while simultaneously their land was being increasingly encroached upon by oil companies, brought attention to their concerns. [ 6 ] After historically being excluded from the political process, indigenous groups, especially concerned about the worsening environmental devastation of the extraction business and global climate change , started social movements aimed at creating a new approach to development that would protect the environment and harmonize its relationship with people. CONAIE (National Confederation of Indigenous Nationalities of Ecuador), the largest federation of indigenous movements focused on social justice began lobbying for a new constitution that incorporated recognition of the nation's indigenous groups, their language, culture, history, and land rights, and inherently their concepts of sumak kawsay and Pachamama (English: "Mother Nature"). [ 7 ]
The economy, based on the exportation of the country's raw materials, mainly oil, was also wreaking havoc on the nation's environment, an area with valuable biological and cultural diversity. [ 6 ] The global economic crisis of 2008 revealed the vulnerabilities of an extractive economy, and led to a period of political turmoil in the country that made obvious the need for a new more inclusive government that embodied a post-oil, post-neoliberal development paradigm. [ 7 ] [ 8 ] In late 2006, the election of leftist Rafael Correa, who ran on an anti-neoliberal platform, showed the emergence of a new political era for Ecuador. [ 6 ]
Buen vivir ("good living") emerged as a response to the traditional strategies for development and their negative environmental, social, or economic effects. Sumak kawsay means a full life and signifies living in harmony with other people and with nature. Buen Vivir has gained new popularity, spreading throughout parts of South America and evolving as a multicultural concept. The constitution outlines Buen Vivir as a set of rights, one of which is the rights of nature. [ 9 ] In line with the assertion of these rights, Buen Vivir changes the relationship between nature and humans to a more bio-pluralistic view, eliminating the separation between nature and society. [ 9 ] [ 10 ]
In Andean communities in Latin America, development is expressed through the notion of sumak kawsay , the Quechua term for buen vivir , which has been proposed as an alternative conception of development and has been incorporated into the constitution of Ecuador . It connotes a harmonious collective development that conceives of the individual within the context of social and cultural communities and his or her natural environment. [ 11 ] Rooted in the indigenous belief system of the Quechua, the concept incorporates western critiques of dominant development models to offer an alternative paradigm based on harmony among human beings and with the natural environment.
The rights of nature are not a new concept. Christopher Stone is widely credited with its first extended written treatment. In his famous book, "Should Trees Have Standing?", [ 2 ] Stone presented the case for conferring legal personality and rights on the environment. As Stone explained, the natural object would “have a legally recognized worth and dignity in its own right, and not merely to serve as a means to benefit ‘us’”. [ 2 ] He also pointed out that just as "streams and forests" do not have the power to speak for themselves, neither do corporations, states, infants, municipalities or universities. "Lawyers speak for them, as they customarily do for the ordinary citizen with legal problems." [ 2 ] [ 12 ]
Environmental activist and President of the Constitutional Assembly Alberto Acosta published Nature as a Subject of Rights , [ 13 ] which first brought the idea to the attention of the public and the government. Acosta framed the rights of nature as part of historical progress. He compared it to the era when women were not thought of as subjects until they in fact became subjects of rights: nature does not seem a plausible bearer of rights until the concept is raised and realized. [ 14 ] A subject of rights remains inconceivable until the concept is put into perspective and becomes something that can be argued over, agreed upon, or simply talked about.
President Rafael Correa included calling for a Constitutional Assembly in his 2006 campaign. On April 15, 2007, over 80% of Ecuadorians voted in favor of calling a new assembly, thanks in large part to the support of indigenous communities. Indigenous groups had been pressuring for a new, more inclusive constitution for years, and were therefore actively involved in the drafting process. Alberto Acosta, the elected Assembly President, pledged to make the assembly more inclusive and incorporate the concerns of the indigenous into the constitution. In the end, a few indigenous representatives were elected to the assembly. [ 7 ] To create a constitution based on the principles of Buen Vivir , the Constitutional Assembly, with the advice of the Pachamama Alliance, enlisted the help of the Community Environmental Legal Defense Fund (CELDF) to draft language for the new provisions of the constitution detailing the Rights of Nature. [ 3 ] Specifically the US lawyers Mari Margil (associate director) and Thomas Linzey (executive director) were asked to use their experience to help the Ecuadorean environmental groups draft the amendments. Indigenous groups also played a role in the drafting process. Fundación Pachamama, in conjunction with leaders in CONAIE, met with members of the assembly to present their ideas for the constitution and gain support. A national media campaign detailing the tenets of the new constitution and the Rights of Nature was also launched to inform and gain support from the public. [ 15 ]
Multiple roundtables were held to discuss the feasibility of adding the Rights of Nature to the constitution. A central point of contention was consent versus consultation. The indigenous communities and some members of the Constitutional Assembly advocated for a right to consent , meaning a clear right to oppose or approve development projects, whereas the government promoted only consultation . Ultimately, the government's stance prevailed, and Article 408 confirms that all natural resources are the state's property. The state can decide to exploit any natural resources that it recognizes to be of national importance, as long as it consults the affected communities, without any obligation to reach an agreement. [ 14 ]
Assembly members Guillem Humberto and Ortiz Alfredo argued for the creation of an Ombudsman for Pachamama. This office would replace the Minister of Environment, who was seen as inadequate, and would act as a legal guardian of nature's rights. In the end, Nature's Ombudsman was not added to the new constitution. [ 14 ]
In the end there were many reasons for wanting the Rights of Nature. As previously mentioned, indigenous groups, specifically the four Pachakutik members within the Constitutional Assembly, advocated for judicial recognition of their communities' way of living. A less sincere motive was illustrated by Rafael Esteves, a member of the populist right, who observed that Ecuador was widely known to be poised to become the first country to give nature legal rights in its constitution; that fact alone drove some members to agree to its passing.
On April 10, 2008, with 91 votes out of 130, the Constitutional Assembly approved Article 10 for inclusion in the new constitution. On June 7, the language of Articles 71 through 74, comprising the Rights of Nature, was presented and debated before receiving approval for inclusion in the constitution. [ 15 ]
On September 28, 2008, a mandatory referendum was held to vote on the new constitution, where the adoption of the constitution was approved by 65% of voters. [ 16 ]
The following articles are found under Title II: Rights in the Constitution of the Republic of Ecuador published in the Official Register on October 20, 2008.
Article 10. Persons, communities, peoples, nations and collectives are bearers of rights and shall enjoy the rights guaranteed to them in the Constitution and in international instruments.
Nature shall be the subject of those rights that the Constitution recognizes for it.
Article 71. Nature, or Pacha Mama, where life is reproduced and occurs, has the right to integral respect for its existence and for the maintenance and regeneration of its life cycles, structure, functions and evolutionary processes.
All persons, communities, peoples and nations can call upon public authorities to enforce the rights of nature. To enforce and interpret these rights, the principles set forth in the Constitution shall be observed, as appropriate.
The State shall give incentives to natural persons and legal entities and to communities to protect nature and to promote respect for all the elements comprising an ecosystem.
Article 72. Nature has the right to be restored. This restoration shall be apart from the obligation of the State and natural persons or legal entities to compensate individuals and communities that depend on affected natural systems.
In those cases of severe or permanent environmental impact, including those caused by the exploitation of nonrenewable natural resources, the State shall establish the most effective mechanisms to achieve the restoration and shall adopt adequate measures to eliminate or mitigate harmful environmental consequences.
Article 73. The State shall apply preventive and restrictive measures on activities that might lead to the extinction of species, the destruction of ecosystems and the permanent alteration of natural cycles.
The introduction of organisms and organic and inorganic material that might definitively alter the nation's genetic assets is forbidden.
Article 74. Persons, communities, peoples, and nations shall have the right to benefit from the environment and the natural wealth enabling them to enjoy the good way of living.
Environmental services shall not be subject to appropriation; their production, delivery, use and development shall be regulated by the State. [ 17 ]
Ecuador's codification of the Rights of Nature is significant as it is the first case where this concept has been invoked at the national level. The articles set out a rights-based system that recognizes Nature, or Pachamama , as a right-bearing entity that holds value in itself, apart from human use. This differs from traditional systems that see nature as property, giving landowners the right to damage or destroy ecosystems that depend on their land. The rights-based approach spelled out in the Rights of Nature expands on previous laws for regulation and conservation by recognizing that nature has fundamental and inalienable rights as a valuable entity in and of itself. The system also assigns liability for damage to the environment and holds the government responsible for the reparation of any damage. Additionally, if an ecosystem's rights are violated, it gives people the authority to petition on behalf of the ecosystem to ensure that its interests are not subverted to the interests of individuals or corporations. [ 3 ] [ 8 ]
The inclusion of the Rights of Nature also makes the constitution more democratic and inclusive, as it reflects the indigenes' idea of Nature as a mother that must be respected and celebrated. This is the first constitution that has incorporated indigenous concepts of sumak kawsay and Pachamama , as well as recognized the plurinationality of Ecuador. This has broad significance for the recognition of indigenous groups and their right to preserve their land and culture. The combination of human rights with the rights of nature will allow for more effective protection of indigenous communities. [ 15 ]
The Rights of Nature are further incorporated in the updated National Plan for Good Living, which states guaranteeing the Rights of Nature and promoting a healthy and sustainable environment as one of its twelve objectives. Policies under the objective include aims to preserve and manage biodiversity, diversify the national energy matrix with renewable sources, prevent, control and mitigate environmental damage, promote adaptation to and mitigation of climate change , and incorporate an environmental approach into all public policies. [ 10 ]
The environmental laws that exist in the Anthropocene are geared toward the advantage of the human race. The rules to protect the environment are set for the health and wellness of humanity; nature is seen as the property of humans.
The Anthropocene caters to humans of privilege. Throughout history, it is evident that governments and powerful people have defined an otherness. "Like women, homosexuals and non-whites, nature is 'othered' by people through privileging law and rights that distinguish between subject and object." [ 12 ]
Ecuador has taken action towards an ecocentric influenced constitution, giving nature legal, constitutional rights. This means that Ecuador has recognized nature, or Pachamama, as a capable and deserving right-bearing entity that is equal to humans. The Rights of Nature also transforms the relationship between nature and humans by asserting that nature is not just an object. [ 18 ] By putting ecosystems on an equal footing with humans, the conception of humans as masters or as separate from nature is dismissed. [ 9 ] Instead, this system celebrates nature and recognizes that humans are a part of it. [ 6 ] Many have thought this to be part of progressivism and related it to other examples of progression such as homosexual rights and the rights of women. [ citation needed ]
The Rights of Nature has been applied to several legal disputes and considered in government development initiatives.
Wheeler c. Director de la Procuraduria General Del Estado de Loja was the first case in history to vindicate the Rights of Nature. The lawsuit was filed in March 2011 against the local government near Rio Vilcabamba, which was responsible for a road expansion project that dumped debris into the river, narrowing its width and thereby doubling its speed. The project was also done without the completion of an environmental impact assessment or the consent of local residents. The case was filed by two such residents, citing the violation of the Rights of Nature, rather than property rights, for the damage done to the river. The case was important because the court stated that the rights of nature would prevail over other constitutional rights if they were in conflict with each other, setting an important precedent. The proceedings also confirmed that the burden of proof to show there is no damage lies with the defendant. Though the plaintiffs were granted a victory in court, the enforcement of the ruling has been lacking, as the local government has been slow to comply with the mandated reparations. [ 19 ]
In March 2011, right after the ruling in the Wheeler case, the government of Ecuador filed a case against illegal gold mining operations in northern Ecuador, in the remote districts of San Lorenzo and Eloy Alfaro. The mining operations, which were argued to be polluting the nearby rivers, violated the rights of nature. This case differs from the previous one in that it was the government addressing the violation of the rights of nature. It was also swiftly enforced: a military operation to destroy the machinery used for illegal mining was ordered and implemented. [ 19 ]
The Yasuni-Ishpingo, Tambococha, and Tiputini (ITT) Initiative, referring to the corridor of oil reserves within the Yasuni National Park , is the first post-oil development initiative that recognizes that the benefits gained from the Amazon are greater than the economic benefits from oil extraction. The aim of the initiative is therefore to protect the biodiversity of the area, which UNESCO has declared a biodiversity reserve, by keeping the oil reserves in the ground, in return for compensation from the international community for at least half of the projected benefit Ecuador would receive from the oil extraction (approximately $3.5 billion). These funds would be used to fund other economic initiatives to alleviate poverty and develop the renewable energy sector. The importance of keeping the oil in the ITT area in the ground has been argued as of international importance to mitigate the effects of global climate change by preventing CO 2 emissions and the local environmental devastation the extraction would cause. The Rights of Nature and other articles of the new constitution also make the protection of the park a legal imperative, as the extraction would be a violation of nature's rights. [ 8 ] Though there originally was some difficulty evoking a sense of international responsibility to fund the initiative, especially with the national constitution requiring this law already, eventually in August 2010 Ecuador came to an arrangement with the UNDP for funding of the initiative through the issue of Yasuní Guarantee Certificates, denoting the amount of CO 2 emissions avoided and their monetary value, which can potentially be used in the European Union Emission Trading Scheme . [ 6 ]
In 2021, in a landmark ruling, the constitutional court of Ecuador decided that mining permits for plans to mine for copper and gold in the protected cloud forest in Los Cedros, would harm the biodiversity and violate the rights of nature, and would be unconstitutional. [ 20 ]
The adoption of the Rights of Nature by Ecuador has received international praise from many countries that see this as a revolutionary way to conceptualize the environment and a way for Ecuador to move beyond the extractive economy of its past. [ 21 ] Initiatives to adopt the concept of ecosystem rights have been taken or are being taken in various parts of the world, including Bolivia, Turkey, Nepal, and various municipalities in the United States. [ 3 ] In 2010, Bolivia adopted the Law of the Rights of Mother Earth to recognize rights of nature at the national level. The Colombian Constitutional and Supreme Courts recognized rights for the Atrato river and Amazon ecosystem in 2016 and 2018, respectively. [ 22 ]
Criticisms of the Rights of Nature have generally centered on the mechanisms of enforcement of the provision. One criticism is that though the constitution establishes stronger regulations for the environment, it also gives the state the power to relax these regulations if found to be in the national interest. [ 6 ] Therefore, much of the enforcement of the ecosystem's rights depends on the will of the government, or an active citizenry. [ 7 ] Indigenous groups have also expressed dissatisfaction that the constitution does not give local communities veto power over projects affecting their land. [ 15 ] The amendments only call for consultation on projects, rather than consent by the surrounding communities, which can undermine their ability to uphold the rights of nature. [ 18 ] There are also concerns that the Rights of Nature could negatively affect foreign direct investment since companies will not want to comply with the more stringent regulations. [ 21 ] On the other hand, people are skeptical of the Correa administration for still approving projects by foreign extraction companies that violate the Rights of Nature. [ 18 ] This skepticism stems from the history of corruption within the Ecuadorean government, as well as from the administration's shutting down of environmental groups that stood for the Rights of Nature, such as Accion Ecologica (AE) and the Development Council of the Indigenous Nationalities and Peoples of Ecuador (CODENPE). [ 23 ]
There is also much criticism of the text of the Rights of Nature itself, specifically of its content and structure. Some point to clashes between articles and the lack of a hierarchy among them: there is no clear understanding of whether humans' constitutional rights or nature's constitutional rights take precedence. Another issue is the vagueness of the text, which leaves many important terms undefined. The constitution does not define "la naturaleza" or "Pachamama," making the extent of the entities involved unclear. It also leaves open who has judicial standing to represent nature, and who is to enforce those rights. Along the same lines, the extent of protection or remediation is unspecified. [ 23 ]
This article incorporates text from a free content work. Licensed under CC-BY-SA IGO 3.0. Text taken from Rethinking Education. Towards a Global Common Good? , p32, UNESCO. | https://en.wikipedia.org/wiki/Rights_of_nature_in_Ecuador |
In structural engineering , a rigid frame is the load-resisting skeleton constructed with straight or curved members interconnected by predominantly rigid connections, which resist movements induced at the joints of members. Its members can resist bending moment, shear , and axial loads.
The two common assumptions as to the behavior of a building frame are (1) that its beams are free to rotate at their connections or (2) that its members are so connected that the angles they make with each other do not change under load. Frameworks with connections of intermediate stiffness will be intermediate between these two extremes. They are commonly called semirigid frames. The AISC specifications recognize three basic frame types: rigid frame, simple frame, and partially restrained frame. [ 1 ]
The AISC Steel Specification Commentary on Section B3 provides guidance for the classification of a connection in terms of its rigidity . The secant stiffness of the connection K s is taken as an index property of connection stiffness. Specifically,
K s = M s /θ s , where M s = moment at service loads, kip-in (N-mm), and θ s = rotation at service loads, radians. [ 2 ]
The secant stiffness of the connection is compared to the rotational stiffness of the connected member as follows, in which L and EI are the length and bending rigidity, respectively, of the beam.
If K s L/EI ≥ 20, it is acceptable to consider the connection to be fully restrained (in other words, able to maintain the angles between members). If K s L/EI ≤ 2, it is acceptable to consider the connection to be simple (in other words, it rotates without developing moment). Connections with stiffnesses between these two limits are partially restrained and the stiffness, strength and ductility of the connection must be considered in the design. [ 2 ]
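As a sketch (not part of the AISC text itself), the classification rule above can be expressed as a small function. The numeric inputs in the example are hypothetical, chosen only so that the index K s L/EI lands in each band:

```python
def classify_connection(Ks, L, EI):
    """Classify a connection per the AISC Commentary stiffness index.

    Ks : secant connection stiffness, M_s / theta_s
    L  : beam length
    EI : beam bending rigidity
    The limits 20 and 2 on the index Ks*L/EI follow the AISC
    Specification Commentary on Section B3.
    """
    index = Ks * L / EI
    if index >= 20:
        return "fully restrained (FR)"
    if index <= 2:
        return "simple"
    return "partially restrained (PR)"

# Hypothetical numbers chosen only to land in each band:
print(classify_connection(Ks=2.5e7, L=300.0, EI=3.0e8))  # index 25 -> fully restrained (FR)
print(classify_connection(Ks=1.0e6, L=300.0, EI=3.0e8))  # index 1  -> simple
```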
In rotordynamics , the rigid rotor is a mechanical model of rotating systems. An arbitrary rigid rotor is a 3-dimensional rigid object , such as a top . To orient such an object in space requires three angles, known as Euler angles . A special case is the linear rotor, which requires only two angles to describe its orientation, as in the case of a diatomic molecule . More general molecules are 3-dimensional, such as water (asymmetric rotor), ammonia (symmetric rotor), or methane (spherical rotor).
The linear rigid rotor model consists of two point masses located at fixed distances from their center of mass. The fixed distance between the two masses and the values of the masses are the only characteristics of the rigid model. However, for many actual diatomics this model is too restrictive since distances are usually not completely fixed. Corrections on the rigid model can be made to compensate for small variations in the distance. Even in such a case the rigid rotor model is a useful point of departure (zeroth-order model).
The classical linear rotor consists of two point masses m 1 {\displaystyle m_{1}} and m 2 {\displaystyle m_{2}} (with reduced mass μ = m 1 m 2 m 1 + m 2 {\textstyle \mu ={\frac {m_{1}m_{2}}{m_{1}+m_{2}}}} ) at a distance R {\displaystyle R} of each other. The rotor is rigid if R {\displaystyle R} is independent of time. The kinematics of a linear rigid rotor is usually described by means of spherical polar coordinates , which form a coordinate system of R 3 . In the physics convention the coordinates are the co-latitude (zenith) angle θ {\displaystyle \theta \,} , the longitudinal (azimuth) angle φ {\displaystyle \varphi \,} and the distance R {\displaystyle R} . The angles specify the orientation of the rotor in space. The kinetic energy T {\displaystyle T} of the linear rigid rotor is given by 2 T = μ R 2 [ θ ˙ 2 + ( φ ˙ sin θ ) 2 ] = μ R 2 ( θ ˙ φ ˙ ) ( 1 0 0 sin 2 θ ) ( θ ˙ φ ˙ ) = μ ( θ ˙ φ ˙ ) ( h θ 2 0 0 h φ 2 ) ( θ ˙ φ ˙ ) , {\displaystyle {\begin{aligned}2T&=\mu R^{2}\left[{\dot {\theta }}^{2}+({\dot {\varphi }}\,\sin \theta )^{2}\right]\\[1ex]&=\mu R^{2}{\begin{pmatrix}{\dot {\theta }}&{\dot {\varphi }}\end{pmatrix}}{\begin{pmatrix}1&0\\0&\sin ^{2}\theta \\\end{pmatrix}}{\begin{pmatrix}{\dot {\theta }}\\{\dot {\varphi }}\end{pmatrix}}\\[1ex]&=\mu {\begin{pmatrix}{\dot {\theta }}&{\dot {\varphi }}\end{pmatrix}}{\begin{pmatrix}h_{\theta }^{2}&0\\0&h_{\varphi }^{2}\\\end{pmatrix}}{\begin{pmatrix}{\dot {\theta }}\\{\dot {\varphi }}\end{pmatrix}},\end{aligned}}}
where h θ = R {\displaystyle h_{\theta }=R\,} and h φ = R sin θ {\displaystyle h_{\varphi }=R\sin \theta \,} are scale (or Lamé) factors .
Scale factors are of importance for quantum mechanical applications since they enter the Laplacian expressed in curvilinear coordinates . In the case at hand (constant R {\displaystyle R} ) ∇ 2 = 1 h θ h φ [ ∂ ∂ θ h φ h θ ∂ ∂ θ + ∂ ∂ φ h θ h φ ∂ ∂ φ ] = 1 R 2 [ 1 sin θ ∂ ∂ θ sin θ ∂ ∂ θ + 1 sin 2 θ ∂ 2 ∂ φ 2 ] . {\displaystyle {\begin{aligned}\nabla ^{2}&={\frac {1}{h_{\theta }h_{\varphi }}}\left[{\frac {\partial }{\partial \theta }}{\frac {h_{\varphi }}{h_{\theta }}}{\frac {\partial }{\partial \theta }}+{\frac {\partial }{\partial \varphi }}{\frac {h_{\theta }}{h_{\varphi }}}{\frac {\partial }{\partial \varphi }}\right]\\&={\frac {1}{R^{2}}}\left[{\frac {1}{\sin \theta }}{\frac {\partial }{\partial \theta }}\sin \theta {\frac {\partial }{\partial \theta }}+{\frac {1}{\sin ^{2}\theta }}{\frac {\partial ^{2}}{\partial \varphi ^{2}}}\right].\end{aligned}}}
The classical Hamiltonian function of the linear rigid rotor is H = 1 2 μ R 2 [ p θ 2 + p φ 2 sin 2 θ ] . {\displaystyle H={\frac {1}{2\mu R^{2}}}\left[p_{\theta }^{2}+{\frac {p_{\varphi }^{2}}{\sin ^{2}\theta }}\right].}
The linear rigid rotor model can be used in quantum mechanics to predict the rotational energy of a diatomic molecule. The rotational energy depends on the moment of inertia for the system, I {\displaystyle I} . In the center of mass reference frame, the moment of inertia is equal to:
I = μ R 2 {\displaystyle I=\mu R^{2}}
where μ {\displaystyle \mu } is the reduced mass of the molecule and R {\displaystyle R} is the distance between the two atoms.
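As a numerical illustration, the reduced mass and moment of inertia can be computed directly. The ¹²C¹⁶O masses and bond length below are approximate textbook values chosen for illustration, not authoritative reference data:

```python
# Illustrative check of I = mu * R^2 for a diatomic rigid rotor.
# The 12C16O numbers are approximate, for illustration only.
AMU = 1.66054e-27              # kg per atomic mass unit
m_C = 12.000 * AMU             # mass of 12C (kg)
m_O = 15.995 * AMU             # mass of 16O (kg)
R = 1.128e-10                  # approximate C-O bond length (m)

mu = m_C * m_O / (m_C + m_O)   # reduced mass
I = mu * R**2                  # moment of inertia about the center of mass
```

With these values the moment of inertia comes out near 1.45 × 10⁻⁴⁶ kg m², the order of magnitude typical of light diatomics.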
According to quantum mechanics , the energy levels of a system can be determined by solving the Schrödinger equation :
H ^ Ψ = E Ψ {\displaystyle {\hat {H}}\Psi =E\Psi }
where Ψ {\displaystyle \Psi } is the wave function and H ^ {\displaystyle {\hat {H}}} is the energy ( Hamiltonian ) operator. For the rigid rotor in a field-free space, the energy operator corresponds to the kinetic energy [ 1 ] of the system:
H ^ = − ℏ 2 2 μ ∇ 2 {\displaystyle {\hat {H}}=-{\frac {\hbar ^{2}}{2\mu }}\nabla ^{2}}
where ℏ {\displaystyle \hbar } is the reduced Planck constant and ∇ 2 {\displaystyle \nabla ^{2}} is the Laplacian . The Laplacian is given above in terms of spherical polar coordinates. The energy operator written in terms of these coordinates is:
H ^ = − ℏ 2 2 I [ 1 sin θ ∂ ∂ θ ( sin θ ∂ ∂ θ ) + 1 sin 2 θ ∂ 2 ∂ φ 2 ] {\displaystyle {\hat {H}}=-{\frac {\hbar ^{2}}{2I}}\left[{1 \over \sin \theta }{\partial \over \partial \theta }\left(\sin \theta {\partial \over \partial \theta }\right)+{1 \over {\sin ^{2}\theta }}{\partial ^{2} \over \partial \varphi ^{2}}\right]}
This operator appears also in the Schrödinger equation of the hydrogen atom after the radial part is separated off. The eigenvalue equation becomes H ^ Y ℓ m ( θ , φ ) = ℏ 2 2 I ℓ ( ℓ + 1 ) Y ℓ m ( θ , φ ) . {\displaystyle {\hat {H}}Y_{\ell }^{m}(\theta ,\varphi )={\frac {\hbar ^{2}}{2I}}\ell (\ell +1)\,Y_{\ell }^{m}(\theta ,\varphi ).} The symbol Y ℓ m ( θ , φ ) {\displaystyle Y_{\ell }^{m}(\theta ,\varphi )} represents a set of functions known as the spherical harmonics . Note that the energy does not depend on m {\displaystyle m\,} . The energy E ℓ = ℏ 2 2 I ℓ ( ℓ + 1 ) {\displaystyle E_{\ell }={\hbar ^{2} \over 2I}\ell \left(\ell +1\right)} is 2 ℓ + 1 {\displaystyle 2\ell +1} -fold degenerate: the functions with fixed ℓ {\displaystyle \ell } and m = − ℓ , − ℓ + 1 , … , ℓ {\displaystyle m=-\ell ,-\ell +1,\dots ,\ell } have the same energy.
Introducing the rotational constant B {\displaystyle B} , we write, E ℓ = B ℓ ( ℓ + 1 ) with B ≡ ℏ 2 2 I . {\displaystyle E_{\ell }=B\;\ell \left(\ell +1\right)\quad {\textrm {with}}\quad B\equiv {\frac {\hbar ^{2}}{2I}}.} In the units of reciprocal length the rotational constant is, B ¯ ≡ B h c = h 8 π 2 c I = ℏ 4 π c μ R e 2 , {\displaystyle {\bar {B}}\equiv {\frac {B}{hc}}={\frac {h}{8\pi ^{2}cI}}={\frac {\hbar }{4\pi c\mu R_{e}^{2}}},} with c the speed of light. If cgs units are used for h {\displaystyle h} , c {\displaystyle c} , and I {\displaystyle I} , B ¯ {\displaystyle {\bar {B}}} is expressed in cm −1 , or wave numbers , which is a unit that is often used for rotational-vibrational spectroscopy. The rotational constant B ¯ ( R ) {\displaystyle {\bar {B}}(R)} depends on the distance R {\displaystyle R} . Often one writes B e = B ¯ ( R e ) {\displaystyle B_{e}={\bar {B}}(R_{e})} where R e {\displaystyle R_{e}} is the equilibrium value of R {\displaystyle R} (the value for which the interaction energy of the atoms in the rotor has a minimum).
A typical rotational absorption spectrum consists of a series of peaks that correspond to transitions between levels with different values of the angular momentum quantum number ( ℓ {\displaystyle \ell } ) such that Δ l = + 1 {\displaystyle \Delta l=+1} , due to the selection rules (see below). Consequently, rotational peaks appear at energies with differences corresponding to an integer multiple of 2 B ¯ {\displaystyle 2{\bar {B}}} .
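The spacing of the absorption comb can be sketched numerically. The ¹²C¹⁶O constants below are illustrative approximations, not reference data:

```python
import math

# Sketch: rotational constant and absorption-line comb of a rigid diatomic.
# The 12C16O numbers are illustrative approximations.
h = 6.62607e-34                # Planck constant (J s)
c = 2.99792458e10              # speed of light in cm/s, so Bbar is in cm^-1
AMU = 1.66054e-27
mu = (12.000 * 15.995) / (12.000 + 15.995) * AMU   # reduced mass (kg)
I = mu * (1.128e-10) ** 2                          # moment of inertia (kg m^2)

Bbar = h / (8 * math.pi**2 * c * I)                # rotational constant (cm^-1)

# the line l -> l+1 sits at Bbar*(l+1)(l+2) - Bbar*l(l+1) = 2*Bbar*(l+1):
lines = [2 * Bbar * (l + 1) for l in range(5)]     # evenly spaced by 2*Bbar
```

For these values B̄ comes out close to 1.9 cm⁻¹, and the lines form an evenly spaced comb with separation 2B̄, as stated above.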
Rotational transitions of a molecule occur when the molecule absorbs a photon [a particle of a quantized electromagnetic (em) field]. Depending on the energy of the photon (i.e., the wavelength of the em field) this transition may be seen as a sideband of a vibrational and/or electronic transition. Pure rotational transitions, in which the vibronic (= vibrational plus electronic) wave function does not change, occur in the microwave region of the electromagnetic spectrum.
Typically, rotational transitions can only be observed when the angular momentum quantum number changes by 1 {\displaystyle 1} ( Δ l = ± 1 ) {\displaystyle (\Delta l=\pm 1)} . This selection rule arises from a first-order perturbation theory approximation of the time-dependent Schrödinger equation . According to this treatment, rotational transitions can only be observed when one or more components of the dipole operator have a non-vanishing transition moment. If z {\displaystyle z} is the direction of the electric field component of the incoming electromagnetic wave, the transition moment is, ⟨ ψ 2 | μ z | ψ 1 ⟩ = ( μ z ) 21 = ∫ ψ 2 ∗ μ z ψ 1 d τ . {\displaystyle \langle \psi _{2}|\mu _{z}|\psi _{1}\rangle =\left(\mu _{z}\right)_{21}=\int \psi _{2}^{*}\mu _{z}\psi _{1}\,\mathrm {d} \tau .}
A transition occurs if this integral is non-zero. By separating the rotational part of the molecular wavefunction from the vibronic part, one can show that this means that the molecule must have a permanent dipole moment . After integration over the vibronic coordinates the following rotational part of the transition moment remains,
( μ z ) l , m ; l ′ , m ′ = μ ∫ 0 2 π d ϕ ∫ − 1 1 Y l ′ m ′ ( θ , ϕ ) ∗ cos θ Y l m ( θ , ϕ ) d ( cos θ ) . {\displaystyle \left(\mu _{z}\right)_{l,m;l',m'}=\mu \int _{0}^{2\pi }\mathrm {d} \phi \int _{-1}^{1}Y_{l'}^{m'}{\!\left(\theta ,\phi \right)}^{*}\cos \theta \;Y_{l}^{m}{\!\left(\theta ,\phi \right)}\;\mathrm {d} (\cos \theta ).} Here μ cos θ {\displaystyle \mu \cos \theta \,} is the z component of the permanent dipole moment. The moment μ {\displaystyle \mu } is the vibronically averaged component of the dipole operator . Only the component of the permanent dipole along the axis of a heteronuclear molecule is non-vanishing.
By the use of the orthogonality of the spherical harmonics Y l m ( θ , ϕ ) {\displaystyle Y_{l}^{m}\,\left(\theta ,\phi \right)} it is possible to determine which values of l {\displaystyle l} , m {\displaystyle m} , l ′ {\displaystyle l'} , and m ′ {\displaystyle m'} will result in nonzero values for the dipole transition moment integral. This constraint results in the observed selection rules for the rigid rotor:
Δ m = 0 and Δ l = ± 1 {\displaystyle \Delta m=0\quad {\hbox{and}}\quad \Delta l=\pm 1}
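The Δl = ±1 rule can be checked numerically in the m = m′ = 0 case, where Y_l^0 is proportional to the Legendre polynomial P_l(cos θ). The sketch below is an illustrative check, not part of the source derivation; it evaluates the θ-integral of the transition moment:

```python
import numpy as np

def legendre(l, x):
    """Legendre polynomial P_l(x) via the standard three-term recurrence."""
    p_prev = np.ones_like(x)
    if l == 0:
        return p_prev
    p = np.array(x, dtype=float)
    for n in range(1, l):
        p_prev, p = p, ((2 * n + 1) * x * p - n * p_prev) / (n + 1)
    return p

def z_moment(l, lp, n=20001):
    """Trapezoidal estimate of the integral of P_l'(x) * x * P_l(x) over
    [-1, 1] (with x = cos(theta)), proportional to the m = m' = 0
    transition moment of the rigid rotor."""
    x = np.linspace(-1.0, 1.0, n)
    y = legendre(lp, x) * x * legendre(l, x)
    return (y.sum() - 0.5 * (y[0] + y[1 - 1] + y[-1] - y[0])) * (x[1] - x[0])
```

Evaluating `z_moment(1, 2)` gives a nonzero value (Δl = +1 allowed), while `z_moment(1, 1)` and `z_moment(1, 3)` vanish to numerical precision, as the selection rule requires.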
The rigid rotor is commonly used to describe the rotational energy of diatomic molecules but it is not a completely accurate description of such molecules. This is because molecular bonds (and therefore the interatomic distance R {\displaystyle R} ) are not completely fixed; the bond between the atoms stretches out as the molecule rotates faster (higher values of the rotational quantum number l {\displaystyle l} ). This effect can be accounted for by introducing a correction factor known as the centrifugal distortion constant D ¯ {\displaystyle {\bar {D}}} (bars on top of various quantities indicate that these quantities are expressed in cm −1 ):
E ¯ l = E l h c = B ¯ l ( l + 1 ) − D ¯ l 2 ( l + 1 ) 2 {\displaystyle {\bar {E}}_{l}={E_{l} \over hc}={\bar {B}}l\left(l+1\right)-{\bar {D}}l^{2}\left(l+1\right)^{2}}
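A rough numerical sketch of the effect, using CO-like values for B̄ and D̄ that are illustrative approximations rather than reference data:

```python
# Sketch of non-rigid-rotor line positions. The l -> l+1 transition energy is
#   Ebar(l+1) - Ebar(l) = 2*Bbar*(l+1) - 4*Dbar*(l+1)**3,
# so successive absorption lines creep closer together instead of staying
# exactly 2*Bbar apart. Bbar and Dbar are illustrative CO-like values (cm^-1).
Bbar, Dbar = 1.93, 6.1e-6

def Ebar(l):
    return Bbar * l * (l + 1) - Dbar * l**2 * (l + 1)**2

nu = [Ebar(l + 1) - Ebar(l) for l in range(10)]   # line positions (cm^-1)
gaps = [nu[i + 1] - nu[i] for i in range(9)]      # separations between lines
```

Every separation in `gaps` is slightly below 2B̄ and shrinks with increasing l, which is the characteristic signature of centrifugal distortion in a rotational spectrum.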
where
The non-rigid rotor is an acceptably accurate model for diatomic molecules but is still somewhat imperfect. This is because, although the model does account for bond stretching due to rotation, it ignores any bond stretching due to vibrational energy in the bond (anharmonicity in the potential).
An arbitrarily shaped rigid rotor is a rigid body of arbitrary shape with its center of mass fixed (or in uniform rectilinear motion) in field-free space R 3 , so that its energy consists only of rotational kinetic energy (and possibly constant translational energy that can be ignored). A rigid body can be (partially) characterized by the three eigenvalues of its moment of inertia tensor , which are real nonnegative values known as principal moments of inertia .
In microwave spectroscopy —the spectroscopy based on rotational transitions—one usually classifies molecules (seen as rigid rotors) as follows:
This classification depends on the relative magnitudes of the principal moments of inertia.
Different branches of physics and engineering use different coordinates for the description of the kinematics of a rigid rotor. In molecular physics Euler angles are used almost exclusively. In quantum mechanical applications it is advantageous to use Euler angles in a convention that is a simple extension of the physical convention of spherical polar coordinates .
The first step is the attachment of a right-handed orthonormal frame (3-dimensional system of orthogonal axes) to the rotor (a body-fixed frame ) . This frame can be attached arbitrarily to the body, but often one uses the principal axes frame—the normalized eigenvectors of the inertia tensor, which always can be chosen orthonormal, since the tensor is symmetric . When the rotor possesses a symmetry-axis, it usually coincides with one of the principal axes. It is convenient to choose as body-fixed z -axis the highest-order symmetry axis.
One starts by aligning the body-fixed frame with a space-fixed frame (laboratory axes), so that the body-fixed x , y , and z axes coincide with the space-fixed X , Y , and Z axis. Secondly, the body and its frame are rotated actively over a positive angle α {\displaystyle \alpha \,} around the z -axis (by the right-hand rule ), which moves the y {\displaystyle y} - to the y ′ {\displaystyle y'} -axis. Thirdly, one rotates the body and its frame over a positive angle β {\displaystyle \beta \,} around the y ′ {\displaystyle y'} -axis. The z -axis of the body-fixed frame has after these two rotations the longitudinal angle α {\displaystyle \alpha \,} (commonly designated by φ {\displaystyle \varphi \,} ) and the colatitude angle β {\displaystyle \beta \,} (commonly designated by θ {\displaystyle \theta \,} ), both with respect to the space-fixed frame. If the rotor were cylindrical symmetric around its z -axis, like the linear rigid rotor, its orientation in space would be unambiguously specified at this point.
If the body lacks cylinder (axial) symmetry, a last rotation around its z -axis (which has polar coordinates β {\displaystyle \beta \,} and α {\displaystyle \alpha \,} ) is necessary to specify its orientation completely. Traditionally the last rotation angle is called γ {\displaystyle \gamma \,} .
The convention for Euler angles described here is known as the z ″ − y ′ − z {\displaystyle z''-y'-z} convention; it can be shown (in the same manner as in this article ) that it is equivalent to the z − y − z {\displaystyle z-y-z} convention in which the order of rotations is reversed.
The total matrix of the three consecutive rotations is the product
R ( α , β , γ ) = ( cos α − sin α 0 sin α cos α 0 0 0 1 ) ( cos β 0 sin β 0 1 0 − sin β 0 cos β ) ( cos γ − sin γ 0 sin γ cos γ 0 0 0 1 ) {\displaystyle \mathbf {R} (\alpha ,\beta ,\gamma )={\begin{pmatrix}\cos \alpha &-\sin \alpha &0\\\sin \alpha &\cos \alpha &0\\0&0&1\end{pmatrix}}{\begin{pmatrix}\cos \beta &0&\sin \beta \\0&1&0\\-\sin \beta &0&\cos \beta \\\end{pmatrix}}{\begin{pmatrix}\cos \gamma &-\sin \gamma &0\\\sin \gamma &\cos \gamma &0\\0&0&1\end{pmatrix}}}
Let r ( 0 ) {\displaystyle \mathbf {r} (0)} be the coordinate vector of an arbitrary point P {\displaystyle {\mathcal {P}}} in the body with respect to the body-fixed frame. The elements of r ( 0 ) {\displaystyle \mathbf {r} (0)} are the 'body-fixed coordinates' of P {\displaystyle {\mathcal {P}}} . Initially r ( 0 ) {\displaystyle \mathbf {r} (0)} is also the space-fixed coordinate vector of P {\displaystyle {\mathcal {P}}} . Upon rotation of the body, the body-fixed coordinates of P {\displaystyle {\mathcal {P}}} do not change, but the space-fixed coordinate vector of P {\displaystyle {\mathcal {P}}} becomes, r ( α , β , γ ) = R ( α , β , γ ) r ( 0 ) . {\displaystyle \mathbf {r} (\alpha ,\beta ,\gamma )=\mathbf {R} (\alpha ,\beta ,\gamma )\mathbf {r} (0).} In particular, if P {\displaystyle {\mathcal {P}}} is initially on the space-fixed Z -axis, it has the space-fixed coordinates R ( α , β , γ ) ( 0 0 r ) = ( r cos α sin β r sin α sin β r cos β ) , {\displaystyle \mathbf {R} (\alpha ,\beta ,\gamma ){\begin{pmatrix}0\\0\\r\\\end{pmatrix}}={\begin{pmatrix}r\cos \alpha \sin \beta \\r\sin \alpha \sin \beta \\r\cos \beta \\\end{pmatrix}},} which shows the correspondence with the spherical polar coordinates (in the physical convention).
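The correspondence with spherical polar coordinates can be verified numerically. The sketch below builds R(α, β, γ) from the three elementary rotations and applies it to a point on the space-fixed Z-axis; the angle values are arbitrary:

```python
import numpy as np

def Rz(t):
    """Rotation by angle t about the z-axis."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def Ry(t):
    """Rotation by angle t about the y-axis."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def R(alpha, beta, gamma):
    # product of the three consecutive rotations, as in the matrix above
    return Rz(alpha) @ Ry(beta) @ Rz(gamma)

alpha, beta, gamma, r = 0.3, 1.1, 2.0, 2.5      # arbitrary test values
p = R(alpha, beta, gamma) @ np.array([0.0, 0.0, r])

# expected spherical-polar correspondence for a point initially on Z:
expected = r * np.array([np.cos(alpha) * np.sin(beta),
                         np.sin(alpha) * np.sin(beta),
                         np.cos(beta)])
```

The rotated point indeed lands at (r cos α sin β, r sin α sin β, r cos β), and R is orthogonal with determinant +1, as a proper rotation must be.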
Knowledge of the Euler angles as a function of time t and the initial coordinates r ( 0 ) {\displaystyle \mathbf {r} (0)} determines the kinematics of the rigid rotor.
The following text forms a generalization of the well-known special case of the rotational energy of an object that rotates around one axis.
It will be assumed from here on that the body-fixed frame is a principal axes frame; it diagonalizes the instantaneous inertia tensor I ( t ) {\displaystyle \mathbf {I} (t)} (expressed with respect to the space-fixed frame), i.e., R ( α , β , γ ) − 1 I ( t ) R ( α , β , γ ) = I ( 0 ) with I ( 0 ) = ( I 1 0 0 0 I 2 0 0 0 I 3 ) , {\displaystyle \mathbf {R} (\alpha ,\beta ,\gamma )^{-1}\;\mathbf {I} (t)\;\mathbf {R} (\alpha ,\beta ,\gamma )=\mathbf {I} (0)\quad {\hbox{with}}\quad \mathbf {I} (0)={\begin{pmatrix}I_{1}&0&0\\0&I_{2}&0\\0&0&I_{3}\\\end{pmatrix}},} where the Euler angles are time-dependent and in fact determine the time dependence of I ( t ) {\displaystyle \mathbf {I} (t)} by the inverse of this equation. This notation implies
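A principal axes frame can be found numerically by diagonalizing the inertia tensor. The following sketch uses made-up point masses and positions, purely for illustration:

```python
import numpy as np

# Sketch: principal moments of inertia of a rigid collection of point masses,
# found by diagonalizing the inertia tensor. Masses/positions are made up.
masses = np.array([1.0, 2.0, 1.5, 0.5])
pos = np.array([[0.1, 0.0, 0.4],
                [-0.3, 0.2, 0.0],
                [0.0, -0.5, 0.2],
                [0.4, 0.3, -0.1]])
pos = pos - (masses[:, None] * pos).sum(axis=0) / masses.sum()  # COM at origin

inertia = np.zeros((3, 3))
for m, r in zip(masses, pos):
    # I = sum_i m_i (|r_i|^2 * 1 - r_i r_i^T)
    inertia += m * (np.dot(r, r) * np.eye(3) - np.outer(r, r))

principal_moments, principal_axes = np.linalg.eigh(inertia)
```

Because the inertia tensor is real and symmetric, `numpy.linalg.eigh` returns real, nonnegative eigenvalues (the principal moments) and an orthonormal set of eigenvectors (the principal axes frame).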
that at t = 0 {\displaystyle t=0} the Euler angles are zero, so that at t = 0 {\displaystyle t=0} the body-fixed frame coincides with the space-fixed frame.
The classical kinetic energy T of the rigid rotor can be expressed in different ways:
Since each of these forms has its use and can be found in textbooks, we will present all of them.
As a function of angular velocity T reads, T = 1 2 [ I 1 ω x 2 + I 2 ω y 2 + I 3 ω z 2 ] {\displaystyle T={\tfrac {1}{2}}\left[I_{1}\omega _{x}^{2}+I_{2}\omega _{y}^{2}+I_{3}\omega _{z}^{2}\right]} with ( ω x ω y ω z ) = ( − sin β cos γ sin γ 0 sin β sin γ cos γ 0 cos β 0 1 ) ( α ˙ β ˙ γ ˙ ) . {\displaystyle {\begin{pmatrix}\omega _{x}\\\omega _{y}\\\omega _{z}\\\end{pmatrix}}={\begin{pmatrix}-\sin \beta \cos \gamma &\sin \gamma &0\\\sin \beta \sin \gamma &\cos \gamma &0\\\cos \beta &0&1\\\end{pmatrix}}{\begin{pmatrix}{\dot {\alpha }}\\{\dot {\beta }}\\{\dot {\gamma }}\\\end{pmatrix}}.}
The vector ω = ( ω x , ω y , ω z ) {\displaystyle {\boldsymbol {\omega }}=(\omega _{x},\omega _{y},\omega _{z})} on the left hand side contains the components of the angular velocity of the rotor expressed with respect to the body-fixed frame. The angular velocity satisfies equations of motion known as Euler's equations (with zero applied torque, since by assumption the rotor is in field-free space). It can be shown that ω {\displaystyle {\boldsymbol {\omega }}} is not the time derivative of any vector, in contrast to the usual definition of velocity . [ 2 ]
The dots over the time-dependent Euler angles on the right hand side indicate time derivatives . Note that a different rotation matrix would result from a different choice of Euler angle convention.
Backsubstitution of the expression of ω {\displaystyle {\boldsymbol {\omega }}} into T gives the kinetic energy in Lagrange form (as a function of the time derivatives of the Euler angles). In matrix-vector notation, 2 T = ( α ˙ β ˙ γ ˙ ) g ( α ˙ β ˙ γ ˙ ) , {\displaystyle 2T={\begin{pmatrix}{\dot {\alpha }}&{\dot {\beta }}&{\dot {\gamma }}\end{pmatrix}}\;\mathbf {g} \;{\begin{pmatrix}{\dot {\alpha }}\\{\dot {\beta }}\\{\dot {\gamma }}\\\end{pmatrix}},} where g {\displaystyle \mathbf {g} } is the metric tensor expressed in Euler angles—a non-orthogonal system of curvilinear coordinates —
g = ( I 1 sin 2 β cos 2 γ + I 2 sin 2 β sin 2 γ + I 3 cos 2 β ( I 2 − I 1 ) sin β sin γ cos γ I 3 cos β ( I 2 − I 1 ) sin β sin γ cos γ I 1 sin 2 γ + I 2 cos 2 γ 0 I 3 cos β 0 I 3 ) . {\displaystyle \mathbf {g} ={\begin{pmatrix}I_{1}\sin ^{2}\beta \cos ^{2}\gamma +I_{2}\sin ^{2}\beta \sin ^{2}\gamma +I_{3}\cos ^{2}\beta &(I_{2}-I_{1})\sin \beta \sin \gamma \cos \gamma &I_{3}\cos \beta \\(I_{2}-I_{1})\sin \beta \sin \gamma \cos \gamma &I_{1}\sin ^{2}\gamma +I_{2}\cos ^{2}\gamma &0\\I_{3}\cos \beta &0&I_{3}\\\end{pmatrix}}.}
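One can check numerically that this Lagrange form agrees with the angular-velocity form of T given earlier; all numbers below are arbitrary test values:

```python
import numpy as np

# Consistency check: the Lagrange form 2T = qdot^T g qdot built from the
# metric tensor above must agree with the angular-velocity form
# 2T = I1*wx^2 + I2*wy^2 + I3*wz^2.
I1, I2, I3 = 1.0, 2.0, 3.0
beta, gamma = 1.2, 2.1                 # alpha does not enter g or omega
qdot = np.array([0.7, -0.3, 0.5])      # (alpha-dot, beta-dot, gamma-dot)
sb, cb = np.sin(beta), np.cos(beta)
sg, cg = np.sin(gamma), np.cos(gamma)

# body-fixed angular velocity from the Euler-angle rates
M = np.array([[-sb * cg, sg, 0.0],
              [ sb * sg, cg, 0.0],
              [ cb,     0.0, 1.0]])
wx, wy, wz = M @ qdot
T_omega = 0.5 * (I1 * wx**2 + I2 * wy**2 + I3 * wz**2)

# metric tensor in Euler angles, as displayed above
gmat = np.array([
    [I1 * sb**2 * cg**2 + I2 * sb**2 * sg**2 + I3 * cb**2,
     (I2 - I1) * sb * sg * cg,
     I3 * cb],
    [(I2 - I1) * sb * sg * cg,
     I1 * sg**2 + I2 * cg**2,
     0.0],
    [I3 * cb, 0.0, I3]])
T_metric = 0.5 * qdot @ gmat @ qdot
```

The agreement follows from g = Mᵀ diag(I₁, I₂, I₃) M, which the check also confirms.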
Often the kinetic energy is written as a function of the angular momentum L {\displaystyle \mathbf {L} } of the rigid rotor. With respect to the body-fixed frame it has the components L i {\displaystyle L_{i}} , and can be shown to be related to the angular velocity, L = I ( 0 ) ω or L i = ∂ T ∂ ω i , i = x , y , z . {\displaystyle \mathbf {L} =\mathbf {I} (0)\;{\boldsymbol {\omega }}\quad {\hbox{or}}\quad L_{i}={\frac {\partial T}{\partial \omega _{i}}},\;\;i=x,\,y,\,z.} This angular momentum is a conserved (time-independent) quantity if viewed from a stationary space-fixed frame. Since the body-fixed frame moves (depends on time) the components L i {\displaystyle L_{i}} are not time independent. If we were to represent L {\displaystyle \mathbf {L} } with respect to the stationary space-fixed frame, we would find time independent expressions for its components.
The kinetic energy is expressed in terms of the angular momentum by T = 1 2 [ L x 2 I 1 + L y 2 I 2 + L z 2 I 3 ] . {\displaystyle T={\frac {1}{2}}\left[{\frac {L_{x}^{2}}{I_{1}}}+{\frac {L_{y}^{2}}{I_{2}}}+{\frac {L_{z}^{2}}{I_{3}}}\right].}
The Hamilton form of the kinetic energy is written in terms of generalized momenta ( p α p β p γ ) = d e f ( ∂ T / ∂ α ˙ ∂ T / ∂ β ˙ ∂ T / ∂ γ ˙ ) = g ( α ˙ β ˙ γ ˙ ) , {\displaystyle {\begin{pmatrix}p_{\alpha }\\p_{\beta }\\p_{\gamma }\\\end{pmatrix}}\mathrel {\stackrel {\mathrm {def} }{=}} {\begin{pmatrix}\partial T/{\partial {\dot {\alpha }}}\\\partial T/{\partial {\dot {\beta }}}\\\partial T/{\partial {\dot {\gamma }}}\\\end{pmatrix}}=\mathbf {g} {\begin{pmatrix}\;\,{\dot {\alpha }}\\{\dot {\beta }}\\{\dot {\gamma }}\\\end{pmatrix}},} where it is used that the g {\displaystyle \mathbf {g} } is symmetric. In Hamilton form the kinetic energy is, 2 T = ( p α p β p γ ) g − 1 ( p α p β p γ ) , {\displaystyle 2T={\begin{pmatrix}p_{\alpha }&p_{\beta }&p_{\gamma }\end{pmatrix}}\;\mathbf {g} ^{-1}\;{\begin{pmatrix}p_{\alpha }\\p_{\beta }\\p_{\gamma }\\\end{pmatrix}},} with the inverse metric tensor given by sin 2 β g − 1 = ( 1 I 1 cos 2 γ + 1 I 2 sin 2 γ ( 1 I 2 − 1 I 1 ) sin β sin γ cos γ − 1 I 1 cos β cos 2 γ − 1 I 2 cos β sin 2 γ ( 1 I 2 − 1 I 1 ) sin β sin γ cos γ 1 I 1 sin 2 β sin 2 γ + 1 I 2 sin 2 β cos 2 γ ( 1 I 1 − 1 I 2 ) sin β cos β sin γ cos γ − 1 I 1 cos β cos 2 γ − 1 I 2 cos β sin 2 γ ( 1 I 1 − 1 I 2 ) sin β cos β sin γ cos γ 1 I 1 cos 2 β cos 2 γ + 1 I 2 cos 2 β sin 2 γ + 1 I 3 sin 2 β ) . 
{\displaystyle \sin ^{2}\beta \;\mathbf {g} ^{-1}={\begin{pmatrix}{\frac {1}{I_{1}}}\cos ^{2}\gamma +{\frac {1}{I_{2}}}\sin ^{2}\gamma &\left({\frac {1}{I_{2}}}-{\frac {1}{I_{1}}}\right)\sin \beta \sin \gamma \cos \gamma &-{\frac {1}{I_{1}}}\cos \beta \cos ^{2}\gamma -{\frac {1}{I_{2}}}\cos \beta \sin ^{2}\gamma \\\left({\frac {1}{I_{2}}}-{\frac {1}{I_{1}}}\right)\sin \beta \sin \gamma \cos \gamma &{\frac {1}{I_{1}}}\sin ^{2}\beta \sin ^{2}\gamma +{\frac {1}{I_{2}}}\sin ^{2}\beta \cos ^{2}\gamma &\left({\frac {1}{I_{1}}}-{\frac {1}{I_{2}}}\right)\sin \beta \cos \beta \sin \gamma \cos \gamma \\-{\frac {1}{I_{1}}}\cos \beta \cos ^{2}\gamma -{\frac {1}{I_{2}}}\cos \beta \sin ^{2}\gamma &\left({\frac {1}{I_{1}}}-{\frac {1}{I_{2}}}\right)\sin \beta \cos \beta \sin \gamma \cos \gamma &{\frac {1}{I_{1}}}\cos ^{2}\beta \cos ^{2}\gamma +{\frac {1}{I_{2}}}\cos ^{2}\beta \sin ^{2}\gamma +{\frac {1}{I_{3}}}\sin ^{2}\beta \\\end{pmatrix}}.}
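Because the quoted inverse is easy to mistype, a numerical sanity check is useful. The sketch below, with arbitrary moments and angles, verifies that the quoted matrix divided by sin²β really inverts g, and that det g = I₁I₂I₃ sin²β:

```python
import numpy as np

# Verify numerically that the displayed sin^2(beta) * g^{-1} inverts the
# metric tensor g. Moments of inertia and angles are arbitrary test values.
I1, I2, I3 = 1.3, 2.1, 0.8
beta, gamma = 0.9, 0.6
sb, cb = np.sin(beta), np.cos(beta)
sg, cg = np.sin(gamma), np.cos(gamma)

gmat = np.array([
    [I1 * sb**2 * cg**2 + I2 * sb**2 * sg**2 + I3 * cb**2,
     (I2 - I1) * sb * sg * cg,
     I3 * cb],
    [(I2 - I1) * sb * sg * cg,
     I1 * sg**2 + I2 * cg**2,
     0.0],
    [I3 * cb, 0.0, I3]])

# the matrix displayed above, entry by entry (this equals sin^2(beta) * g^{-1})
ginv_scaled = np.array([
    [cg**2 / I1 + sg**2 / I2,
     (1 / I2 - 1 / I1) * sb * sg * cg,
     -cb * cg**2 / I1 - cb * sg**2 / I2],
    [(1 / I2 - 1 / I1) * sb * sg * cg,
     sb**2 * sg**2 / I1 + sb**2 * cg**2 / I2,
     (1 / I1 - 1 / I2) * sb * cb * sg * cg],
    [-cb * cg**2 / I1 - cb * sg**2 / I2,
     (1 / I1 - 1 / I2) * sb * cb * sg * cg,
     cb**2 * cg**2 / I1 + cb**2 * sg**2 / I2 + sb**2 / I3]])

ginv = ginv_scaled / sb**2
```

The product g · g⁻¹ evaluates to the identity matrix, and the determinant of g matches the |g| = I₁I₂I₃ sin²β used in the Podolsky form of the kinetic energy operator.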
This inverse tensor is needed to obtain the Laplace-Beltrami operator , which (multiplied by − ℏ 2 {\displaystyle -\hbar ^{2}} ) gives the quantum mechanical energy operator of the rigid rotor.
The classical Hamiltonian given above can be rewritten to the following expression, which is needed in the phase integral arising in the classical statistical mechanics of rigid rotors, T = 1 2 I 1 sin 2 β ( ( p α − p γ cos β ) cos γ − p β sin β sin γ ) 2 + 1 2 I 2 sin 2 β ( ( p α − p γ cos β ) sin γ + p β sin β cos γ ) 2 + p γ 2 2 I 3 . {\displaystyle {\begin{aligned}T={}&{\frac {1}{2I_{1}\sin ^{2}\beta }}\left(\left(p_{\alpha }-p_{\gamma }\cos \beta \right)\cos \gamma -p_{\beta }\sin \beta \sin \gamma \right)^{2}+{}\\&{\frac {1}{2I_{2}\sin ^{2}\beta }}\left(\left(p_{\alpha }-p_{\gamma }\cos \beta \right)\sin \gamma +p_{\beta }\sin \beta \cos \gamma \right)^{2}+{\frac {p_{\gamma }^{2}}{2I_{3}}}.\\\end{aligned}}}
As usual, quantization is performed by replacing the generalized momenta by operators that give first derivatives with respect to their canonically conjugate variables (positions). Thus, p α ⟶ − i ℏ ∂ ∂ α {\displaystyle p_{\alpha }\longrightarrow -i\hbar {\frac {\partial }{\partial \alpha }}} and similarly for p β {\displaystyle p_{\beta }} and p γ {\displaystyle p_{\gamma }} . It is remarkable that this rule replaces the fairly complicated function p α {\displaystyle p_{\alpha }} of all three Euler angles, time derivatives of Euler angles, and inertia moments (characterizing the rigid rotor) by a simple differential operator that does not depend on time or inertia moments and differentiates with respect to one Euler angle only.
The quantization rule is sufficient to obtain the operators that correspond with the classical angular momenta. There are two kinds: space-fixed and body-fixed angular momentum operators. Both are vector operators, i.e., both have three components that transform as vector components among themselves upon rotation of the space-fixed and the body-fixed frame, respectively. The explicit form of the rigid rotor angular momentum operators is given here (but beware, they must be multiplied with ℏ {\displaystyle \hbar } ). The body-fixed angular momentum operators are written as P ^ i {\displaystyle {\hat {\mathcal {P}}}_{i}} . They satisfy anomalous commutation relations .
The quantization rule is not sufficient to obtain the kinetic energy operator from the classical Hamiltonian. Since classically p β {\displaystyle p_{\beta }} commutes with cos β {\displaystyle \cos \beta } and sin β {\displaystyle \sin \beta } and the inverses of these functions, the position of these trigonometric functions in the classical Hamiltonian is arbitrary. After quantization the commutation no longer holds and the order of operators and functions in the Hamiltonian (energy operator) becomes a point of concern. Podolsky [ 1 ] proposed in 1928 that the Laplace-Beltrami operator (times − 1 2 ℏ 2 {\displaystyle -{\tfrac {1}{2}}\hbar ^{2}} ) has the appropriate form for the quantum mechanical kinetic energy operator. This operator has the general form (summation convention: sum over repeated indices—in this case over the three Euler angles q 1 , q 2 , q 3 ≡ α , β , γ {\displaystyle q^{1},\,q^{2},\,q^{3}\equiv \alpha ,\,\beta ,\,\gamma } ):
H ^ = − ℏ 2 2 | g | − 1 2 ∂ ∂ q i | g | 1 2 g i j ∂ ∂ q j , {\displaystyle {\hat {H}}=-{\frac {\hbar ^{2}}{2}}\;|g|^{-{\frac {1}{2}}}{\frac {\partial }{\partial q^{i}}}|g|^{\frac {1}{2}}g^{ij}{\frac {\partial }{\partial q^{j}}},} where | g | {\displaystyle |g|} is the determinant of the g-tensor: | g | = I 1 I 2 I 3 sin 2 β and g i j = ( g − 1 ) i j . {\displaystyle |g|=I_{1}\,I_{2}\,I_{3}\,\sin ^{2}\beta \quad {\hbox{and}}\quad g^{ij}=\left(\mathbf {g} ^{-1}\right)_{ij}.} Given the inverse of the metric tensor above, the explicit form of the kinetic energy operator in terms of Euler angles follows by simple substitution. (Note: The corresponding eigenvalue equation gives the Schrödinger equation for the rigid rotor in the form that it was solved for the first time by Kronig and Rabi [ 3 ] (for the special case of the symmetric rotor). This is one of the few cases where the Schrödinger equation can be solved analytically. All these cases were solved within a year of the formulation of the Schrödinger equation.)
Nowadays it is common to proceed as follows. It can be shown that H ^ {\displaystyle {\hat {H}}} can be expressed in body-fixed angular momentum operators (in this proof one must carefully commute differential operators with trigonometric functions). The result has the same appearance as the classical formula expressed in body-fixed coordinates, H ^ = 1 2 [ P x 2 I 1 + P y 2 I 2 + P z 2 I 3 ] . {\displaystyle {\hat {H}}={\frac {1}{2}}\left[{\frac {{\mathcal {P}}_{x}^{2}}{I_{1}}}+{\frac {{\mathcal {P}}_{y}^{2}}{I_{2}}}+{\frac {{\mathcal {P}}_{z}^{2}}{I_{3}}}\right].} The action of the P ^ i {\displaystyle {\hat {\mathcal {P}}}_{i}} on the Wigner D-matrix is simple. In particular P 2 D m ′ m j ( α , β , γ ) ∗ = ℏ 2 j ( j + 1 ) D m ′ m j ( α , β , γ ) ∗ with P 2 = P x 2 + P y 2 + P z 2 , {\displaystyle {\mathcal {P}}^{2}\,D_{m'm}^{j}(\alpha ,\beta ,\gamma )^{*}=\hbar ^{2}j(j+1)D_{m'm}^{j}(\alpha ,\beta ,\gamma )^{*}\quad {\hbox{with}}\quad {\mathcal {P}}^{2}={\mathcal {P}}_{x}^{2}+{\mathcal {P}}_{y}^{2}+{\mathcal {P}}_{z}^{2},} so that the Schrödinger equation for the spherical rotor ( I = I 1 = I 2 = I 3 {\displaystyle I=I_{1}=I_{2}=I_{3}} ) is solved with the ( 2 j + 1 ) 2 {\displaystyle (2j+1)^{2}} degenerate energy equal to ℏ 2 j ( j + 1 ) 2 I {\displaystyle {\tfrac {\hbar ^{2}j(j+1)}{2I}}} .
The symmetric top (= symmetric rotor) is characterized by I 1 = I 2 {\displaystyle I_{1}=I_{2}} . It is a prolate (cigar shaped) top if I 3 < I 1 = I 2 {\displaystyle I_{3}<I_{1}=I_{2}} and an oblate (disk shaped) top if I 3 > I 1 = I 2 {\displaystyle I_{3}>I_{1}=I_{2}} . In either case we write the Hamiltonian as H ^ = 1 2 [ P 2 I 1 + P z 2 ( 1 I 3 − 1 I 1 ) ] , {\displaystyle {\hat {H}}={\frac {1}{2}}\left[{\frac {{\mathcal {P}}^{2}}{I_{1}}}+{\mathcal {P}}_{z}^{2}\left({\frac {1}{I_{3}}}-{\frac {1}{I_{1}}}\right)\right],} and use that P z 2 D m k j ( α , β , γ ) ∗ = ℏ 2 k 2 D m k j ( α , β , γ ) ∗ . {\displaystyle {\mathcal {P}}_{z}^{2}\,D_{mk}^{j}(\alpha ,\beta ,\gamma )^{*}=\hbar ^{2}k^{2}\,D_{mk}^{j}(\alpha ,\beta ,\gamma )^{*}.} Hence H ^ D m k j ( α , β , γ ) ∗ = E j k D m k j ( α , β , γ ) ∗ {\displaystyle {\hat {H}}\,D_{mk}^{j}(\alpha ,\beta ,\gamma )^{*}=E_{jk}D_{mk}^{j}(\alpha ,\beta ,\gamma )^{*}} with
1 ℏ 2 E j k = j ( j + 1 ) 2 I 1 + k 2 ( 1 2 I 3 − 1 2 I 1 ) . {\displaystyle {\frac {1}{\hbar ^{2}}}E_{jk}={\frac {j(j+1)}{2I_{1}}}+k^{2}\left({\frac {1}{2I_{3}}}-{\frac {1}{2I_{1}}}\right).}
The eigenvalue E j 0 {\displaystyle E_{j0}} is 2 j + 1 {\displaystyle 2j+1} -fold degenerate, for all eigenfunctions with m = − j , − j + 1 , … , j {\displaystyle m=-j,-j+1,\dots ,j} have the same eigenvalue. The energies with |k| > 0 are 2 ( 2 j + 1 ) {\displaystyle 2(2j+1)} -fold degenerate. This exact solution of the Schrödinger equation of the symmetric top was first found in 1927. [ 3 ]
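The degeneracy pattern can be made concrete by enumerating the states; the moments of inertia below are made-up prolate-top values:

```python
# Sketch: enumerate symmetric-top levels E_{jk} (in units of hbar^2) and
# their degeneracies, with made-up prolate-top moments I3 < I1 = I2.
I1, I3 = 2.0, 1.0
jmax = 4

def energy(j, k):
    """E_{jk} / hbar^2 for the symmetric top."""
    return j * (j + 1) / (2 * I1) + k**2 * (1 / (2 * I3) - 1 / (2 * I1))

# count states |j, k, m>: each (j, k) carries 2j+1 values of m, and the
# pair +k, -k shares the same energy, so group by (j, |k|)
levels = {}
for j in range(jmax + 1):
    for k in range(-j, j + 1):
        key = (j, abs(k))
        levels[key] = levels.get(key, 0) + (2 * j + 1)
```

The count confirms the statement above: k = 0 levels are (2j + 1)-fold degenerate, while every |k| > 0 level collects 2(2j + 1) states.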
The asymmetric top problem ( I 1 ≠ I 2 ≠ I 3 {\displaystyle I_{1}\neq I_{2}\neq I_{3}} ) is not soluble analytically.
For a long time, molecular rotations could not be directly observed experimentally. Only measurement techniques with atomic resolution made it possible to detect the rotation of a single molecule. [ 4 ] [ 5 ] At low temperatures, the rotations of molecules (or parts thereof) can be frozen; this was directly visualized by scanning tunneling microscopy , and the stabilization at higher temperatures could be explained by the rotational entropy. [ 5 ] The direct observation of rotational excitation at the single-molecule level was achieved recently using inelastic electron tunneling spectroscopy with the scanning tunneling microscope, by which the rotational excitation of molecular hydrogen and its isotopes was detected. [ 6 ] [ 7 ]
Rigid unit modes ( RUMs ) represent a class of lattice vibrations or phonons that exist in network materials such as quartz , cristobalite or zirconium tungstate . Network materials can be described as three-dimensional networks of polyhedral groups of atoms such as SiO 4 tetrahedra or TiO 6 octahedra. A RUM is a lattice vibration in which the polyhedra are able to move, by translation and/or rotation, without distorting. RUMs in crystalline materials are the counterparts of floppy modes in glasses, as introduced by Jim Phillips and Mike Thorpe .
The idea of rigid unit modes was developed for crystalline materials to enable an understanding of the origin of displacive phase transitions in materials such as silicates , which can be described as infinite three-dimensional networks of corner-linked SiO 4 and AlO 4 tetrahedra . The idea was that rigid unit modes could act as the soft modes for displacive phase transitions .
The original work in silicates showed that many of the phase transitions in silicates could be understood in terms of soft modes that are RUMs.
After the original work on displacive phase transitions , the RUM model was also applied to understanding the nature of the disordered high-temperature phases of materials such as cristobalite , the dynamics and localised structural distortions in zeolites , and negative thermal expansion .
The simplest way to understand the origin of RUMs is to consider the balance between the numbers of constraints and degrees of freedom of the network, an engineering analysis that dates back to James Clerk Maxwell and which was introduced to amorphous materials by Jim Phillips and Mike Thorpe. If the number of constraints exceeds the number of degrees of freedom, the structure will be rigid. On the other hand, if the number of degrees of freedom exceeds the number of constraints, the structure will be floppy.
For a structure that consists of corner-linked tetrahedra (such as the SiO 4 tetrahedra in silica , SiO 2 ) we can count the numbers of constraints and degrees of freedom as follows. For a given tetrahedron, the position of any corner has to have its three spatial coordinates (x,y,z) match the spatial coordinates of the corresponding corner of a linked tetrahedron. Thus each corner has three constraints. These are shared by the two linked tetrahedra, so contribute 1.5 constraints to each tetrahedron. There are 4 corners, so we have a total of 6 constraints per tetrahedron. A rigid three-dimensional object has 6 degrees of freedom, 3 translations and 3 rotations. Thus there is an exact balance between the numbers of constraints and degrees of freedom.
(Note that we can get an identical result by considering the atoms to be the basic units. There are 5 atoms in the structural tetrahedron, but 4 of these are shared by two tetrahedra, so that there are 3 + 4*3/2 = 9 degrees of freedom per tetrahedron. The number of constraints to hold together such a tetrahedron is 9 (4 distances and 5 angles).)
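Both counting schemes can be written out explicitly as a short arithmetic sketch:

```python
# Maxwell counting for a network of corner-sharing tetrahedra (as in SiO2),
# written out both ways described in the text.

# Polyhedron counting: each of the 4 corners carries 3 positional matching
# constraints, shared between 2 linked tetrahedra; a rigid body has
# 6 degrees of freedom (3 translations + 3 rotations).
constraints_per_tet = 4 * 3 / 2        # = 6
dof_per_tet = 3 + 3                    # = 6

# Atom counting: 1 central atom plus 4 corner atoms shared by 2 tetrahedra,
# 3 coordinates each; rigidity needs 4 bond lengths + 5 angles = 9 constraints.
dof_atoms = 3 * (1 + 4 / 2)            # = 9
constraints_atoms = 4 + 5              # = 9
```

In both bookkeepings the counts balance exactly, which is the borderline-rigid situation the following paragraph describes.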
What this balance means is that a structure composed of structural tetrahedra joined at corners is exactly on the border between being rigid and floppy. What appears to happen is that symmetry reduces the number of constraints so that structures such as quartz and cristobalite are slightly floppy and thus support some RUMs.
The above analysis can be applied to any network structure composed of polyhedral groups of atoms. One example is the perovskite family of structures, which consists of corner-linked BX 6 octahedra such as TiO 6 or ZrO 6 . A simple counting analysis would in fact suggest that such structures are rigid, but in the ideal cubic phase symmetry allows some degree of flexibility. Zirconium tungstate , the archetypal material showing negative thermal expansion , contains ZrO 6 octahedra and WO 4 tetrahedra, with one of the corners of each WO 4 tetrahedron having no linkage. The counting analysis shows that, like silica, zirconium tungstate has an exact balance between the numbers of constraints and degrees of freedom, and further analysis has shown the existence of RUMs in this material.
Rigidity theory , or topological constraint theory, is a tool for predicting properties of complex networks (such as glasses ) based on their composition. It was introduced by James Charles Phillips in 1979 [ 1 ] and 1981, [ 2 ] and refined by Michael Thorpe in 1983. [ 3 ] Inspired by the study of the stability of mechanical trusses as pioneered by James Clerk Maxwell , [ 4 ] and by the seminal work on glass structure done by William Houlder Zachariasen , [ 5 ] this theory reduces complex molecular networks to nodes (atoms, molecules, proteins, etc.) constrained by rods (chemical constraints), thus filtering out microscopic details that ultimately don't affect macroscopic properties. An equivalent theory was developed by P. K. Gupta and A. R. Cooper in 1990, where rather than nodes representing atoms, they represented unit polytopes . [ 6 ] An example of this would be the SiO 4 tetrahedra in pure glassy silica . This style of analysis has applications in biology and chemistry, such as understanding adaptability in protein-protein interaction networks. [ 7 ] Rigidity theory applied to the molecular networks arising from phenotypical expression of certain diseases may provide insights regarding their structure and function.
In molecular networks, atoms can be constrained by radial 2-body bond-stretching constraints, which keep interatomic distances fixed, and angular 3-body bond-bending constraints, which keep angles fixed around their average values. As stated by Maxwell's criterion, a mechanical truss is isostatic when the number of constraints equals the number of degrees of freedom of the nodes. In this case, the truss is optimally constrained, being rigid but free of stress . This criterion has been applied by Phillips to molecular networks, which are called flexible, stressed-rigid or isostatic when the number of constraints per atom is respectively lower, higher or equal to 3, the number of degrees of freedom per atom in a three-dimensional system. [ 8 ] The same condition applies to random packing of spheres, which are isostatic at the jamming point.
Typically, the conditions for glass formation will be optimal if the network is isostatic, which is for example the case for pure silica . [ 9 ] Flexible systems show internal degrees of freedom, called floppy modes, [ 3 ] whereas stressed-rigid ones are locked by their high number of constraints and tend to crystallize instead of forming glass during a quick quenching.
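As an illustration of this constraint enumeration (a sketch using the standard Phillips–Thorpe counts of r/2 bond-stretching plus 2r − 3 bond-bending constraints for an atom of coordination r; the Ge–Se composition is the classic example and is not taken from the passage above):

```python
# Per-atom constraint count for coordination r (standard Phillips-Thorpe
# enumeration): r/2 bond-stretching plus (2r - 3) bond-bending constraints.
def constraints_per_atom(r):
    return r / 2 + (2 * r - 3)

r_Ge, r_Se = 4, 2     # coordination numbers from the 8-N rule

def n_c(x):
    """Mean constraints per atom in a Ge_x Se_(1-x) glass."""
    return x * constraints_per_atom(r_Ge) + (1 - x) * constraints_per_atom(r_Se)

# The isostatic point n_c = 3 falls at x = 0.2 (mean coordination 2.4):
assert abs(n_c(0.2) - 3) < 1e-12
assert n_c(0.1) < 3 < n_c(0.3)   # flexible below, stressed-rigid above
```

Compositions below x = 0.2 are flexible, those above are stressed-rigid, matching the flexible/isostatic/stressed-rigid classification described above.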
The conditions for isostaticity can be derived by looking at the internal degrees of freedom of a general 3D network. For N {\displaystyle N} nodes, N c {\displaystyle N_{c}} constraints, and M e q {\displaystyle M_{eq}} equations of equilibrium, the number of degrees of freedom is F = 3 N − N c − M e q {\displaystyle F=3N-N_{c}-M_{eq}}
The node term picks up a factor of 3 due to there being translational degrees of freedom in the x , y , and z directions. By similar reasoning, M e q = 6 {\displaystyle M_{eq}=6} in 3D, as there is one equation of equilibrium for translational and rotational modes in each dimension. This yields F = 3 N − N c − 6 {\displaystyle F=3N-N_{c}-6}
This can be applied to each node in the system by normalizing by the number of nodes: f = 3 − n c − 6 N {\displaystyle f=3-n_{c}-{\frac {6}{N}}}
where f = F N {\displaystyle f={\frac {F}{N}}} , n c = N c N {\displaystyle n_{c}={\frac {N_{c}}{N}}} , and the last term has been dropped since for atomistic systems 6 ≪ N {\displaystyle 6\ll N} . Isostatic conditions are achieved when f = 0 {\displaystyle f=0} , yielding the number of constraints per atom in the isostatic condition of n c = 3 {\displaystyle n_{c}=3} .
An alternative derivation is based on analyzing the shear modulus G {\displaystyle G} of the 3D network or solid structure. The isostatic condition, which represents the limit of mechanical stability, is equivalent to setting G = 0 {\displaystyle G=0} in a microscopic theory of elasticity that provides G {\displaystyle G} as a function of the internal coordination number of nodes and of the number of degrees of freedom. The problem has been solved by Alessio Zaccone and E. Scossa-Romano in 2011, who derived the analytical formula for the shear modulus of a 3D network of central-force springs (bond-stretching constraints): G = ( 1 / 30 ) κ R 0 2 ( z − 2 d ) {\displaystyle G=(1/30)\kappa R_{0}^{2}(z-2d)} . [ 10 ] Here, κ {\displaystyle \kappa } is the spring constant, R 0 {\displaystyle R_{0}} is the distance between two nearest-neighbor nodes, z {\displaystyle z} the average coordination number of the network (note that here z N / 2 ≡ N c {\displaystyle zN/2\equiv N_{c}} and z / 2 ≡ n c {\displaystyle z/2\equiv n_{c}} ), and 2 d = 6 {\displaystyle 2d=6} in 3D. A similar formula has been derived for 2D networks where the prefactor is 1 / 18 {\displaystyle 1/18} instead of 1 / 30 {\displaystyle 1/30} .
Hence, based on the Zaccone–Scossa-Romano expression for G {\displaystyle G} , upon setting G = 0 {\displaystyle G=0} , one obtains z = 2 d = 6 {\displaystyle z=2d=6} , or equivalently in different notation, n c = 3 {\displaystyle n_{c}=3} , which defines the Maxwell isostatic condition.
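The Zaccone–Scossa-Romano expression can be evaluated directly, and setting G = 0 recovers the Maxwell point z = 2d = 6 (a minimal sketch; the unit values for κ and R₀ are illustrative):

```python
def shear_modulus(z, kappa=1.0, R0=1.0, d=3):
    """G = (1/30) * kappa * R0**2 * (z - 2d) for a 3D central-force
    spring network (kappa and R0 are illustrative unit values)."""
    return (1 / 30) * kappa * R0**2 * (z - 2 * d)

assert shear_modulus(6.0) == 0.0                     # isostatic: marginal rigidity
assert shear_modulus(7.0) > 0 > shear_modulus(5.0)   # rigid vs. floppy
```

The sign of G thus separates over-coordinated (rigid) from under-coordinated (floppy) networks, with the isostatic network exactly at the limit of mechanical stability.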
A similar analysis can be done for 3D networks with bond-bending interactions (on top of bond-stretching), which leads to the isostatic condition z = 2.4 {\displaystyle z=2.4} , with a lower threshold due to the angular constraints imposed by bond-bending. [ 11 ]
Rigidity theory allows the prediction of optimal isostatic compositions, as well as the composition dependence of glass properties, by a simple enumeration of constraints. [ 12 ] These glass properties include, but are not limited to, elastic modulus , shear modulus , bulk modulus , density, Poisson's ratio , coefficient of thermal expansion, hardness, [ 13 ] and toughness . In some systems, due to the difficulty of directly enumerating constraints by hand and knowing all system information a priori , the theory is often employed in conjunction with computational methods in materials science such as molecular dynamics (MD). Notably, the theory played a major role in the development of Gorilla Glass 3 . [ 14 ] Extended to glasses at finite temperature [ 15 ] and finite pressure, [ 16 ] rigidity theory has been used to predict glass transition temperature, viscosity and mechanical properties. [ 8 ] It was also applied to granular materials [ 17 ] and proteins . [ 18 ]
In the context of soft glasses, rigidity theory has been used by Alessio Zaccone and Eugene Terentjev to predict the glass transition temperature of polymers and to provide a molecular-level derivation and interpretation of the Flory–Fox equation . [ 19 ] The Zaccone–Terentjev theory also provides an expression for the shear modulus of glassy polymers as a function of temperature which is in quantitative agreement with experimental data, and is able to describe the many orders of magnitude drop of the shear modulus upon approaching the glass transition from below. [ 19 ]
In 2001, Boolchand and coworkers found that the isostatic compositions in glassy alloys—predicted by rigidity theory—exist not just at a single threshold composition; rather, in many systems they span a small, well-defined range of compositions intermediate between the flexible (under-constrained) and stressed-rigid (over-constrained) domains. [ 20 ] This window of optimally constrained glasses is thus referred to as the intermediate phase or the reversibility window , as glass formation is supposed to be reversible, with minimal hysteresis, inside the window. [ 20 ] Its existence has been attributed to the glassy network consisting almost exclusively of a varying population of isostatic molecular structures. [ 16 ] [ 21 ] The existence of the intermediate phase remains a controversial but stimulating topic in glass science.
Rigour ( British English ) or rigor ( American English ; see spelling differences ) describes a condition of stiffness or strictness. [ 1 ] These constraints may be environmentally imposed, such as "the rigours of famine "; logically imposed, such as mathematical proofs which must maintain consistent answers; or socially imposed, such as the process of defining ethics and law .
"Rigour" comes to English through old French (13th c., Modern French rigueur ) meaning "stiffness", which itself is based on the Latin rigorem (nominative rigor ) "numbness, stiffness, hardness, firmness; roughness, rudeness", from the verb rigere "to be stiff". [ 2 ] The noun was frequently used to describe a condition of strictness or stiffness, which arises from a situation or constraint either chosen or experienced passively. For example, the title of the book Theologia Moralis Inter Rigorem et Laxitatem Medi roughly translates as "mediating theological morality between rigour and laxness". The book details, for the clergy , situations in which they are obligated to follow church law exactly, and in which situations they can be more forgiving yet still considered moral. [ 3 ] Rigor mortis translates directly as the stiffness ( rigor ) of death ( mortis ), again describing a condition which arises from a certain constraint (death).
Intellectual rigour is a process of thought which is consistent, does not contain self-contradiction, and takes into account the entire scope of available knowledge on the topic. It actively avoids logical fallacy . Furthermore, it requires a sceptical assessment of the available knowledge. If a topic or case is dealt with in a rigorous way, it typically means that it is dealt with in a comprehensive, thorough and complete way, leaving no room for inconsistencies. [ 4 ]
Scholarly method describes the different approaches or methods which may be taken to apply intellectual rigour on an institutional level to ensure the quality of information published. An example of intellectual rigour assisted by a methodical approach is the scientific method , in which a person will produce a hypothesis based on what they believe to be true, then construct experiments in order to prove that hypothesis wrong. This method, when followed correctly, helps to prevent against circular reasoning and other fallacies which frequently plague conclusions within academia. Other disciplines, such as philosophy and mathematics, employ their own structures to ensure intellectual rigour. Each method requires close attention to criteria for logical consistency, as well as to all relevant evidence and possible differences of interpretation. At an institutional level, peer review is used to validate intellectual rigour.
Intellectual rigour is a subset of intellectual honesty —a practice of thought in which one's convictions are kept in proportion to valid evidence . [ 5 ] Intellectual honesty is an unbiased approach to the acquisition, analysis, and transmission of ideas. A person is being intellectually honest when he or she, knowing the truth, states that truth, regardless of outside social/environmental pressures. It is possible to doubt whether complete intellectual honesty exists—on the grounds that no one can entirely master his or her own presuppositions—without doubting that certain kinds of intellectual rigour are potentially available. The distinction certainly matters greatly in debate , if one wishes to say that an argument is flawed in its premises .
The setting for intellectual rigour does tend to assume a principled position from which to advance or argue. An opportunistic tendency to use any argument at hand is not very rigorous, although very common in politics , for example. Arguing one way one day, and another later, can be defended by casuistry , i.e. by saying the cases are different.
In the legal context, for practical purposes, the facts of cases do always differ. Case law can therefore be at odds with a principled approach; and intellectual rigour can seem to be defeated. This defines a judge 's problem with uncodified law . Codified law poses a different problem, of interpretation and adaptation of definite principles without losing the point; here applying the letter of the law, with all due rigour, may on occasion seem to undermine the principled approach .
Mathematical rigour can apply to methods of mathematical proof and to methods of mathematical practice (thus relating to other interpretations of rigour).
Mathematical rigour is often cited as a kind of gold standard for mathematical proof . Its history traces back to Greek mathematics , especially to Euclid 's Elements . [ 6 ]
Until the 19th century, Euclid's Elements was seen as extremely rigorous and profound, but in the late 19th century, Hilbert (among others) realized that the work left certain assumptions implicit—assumptions that could not be proved from Euclid's Axioms (e.g. two circles can intersect in a point, some point is within an angle, and figures can be superimposed on each other). [ 7 ] This was contrary to the idea of rigorous proof where all assumptions need to be stated and nothing can be left implicit. New foundations were developed using the axiomatic method to address this gap in rigour found in the Elements (e.g., Hilbert's axioms , Birkhoff's axioms , Tarski's axioms ).
During the 19th century, the term "rigorous" began to be used to describe increasing levels of abstraction when dealing with calculus which eventually became known as mathematical analysis . The works of Cauchy added rigour to the older works of Euler and Gauss . The works of Riemann added rigour to the works of Cauchy. The works of Weierstrass added rigour to the works of Riemann, eventually culminating in the arithmetization of analysis . Starting in the 1870s, the term gradually came to be associated with Cantorian set theory .
Mathematical rigour can be modelled as amenability to algorithmic proof checking . Indeed, with the aid of computers, it is possible to check some proofs mechanically. [ 8 ] Formal rigour is the introduction of high degrees of completeness by means of a formal language where such proofs can be codified using set theories such as ZFC (see automated theorem proving ).
Published mathematical arguments have to conform to a standard of rigour, but are written in a mixture of symbolic and natural language. In this sense, written mathematical discourse is a prototype of formal proof. Often, a written proof is accepted as rigorous although it might not be formalised as yet. The reason often cited by mathematicians for writing informally is that completely formal proofs tend to be longer and more unwieldy, thereby obscuring the line of argument. An argument that appears obvious to human intuition may in fact require fairly long formal derivations from the axioms. A particularly well-known example is how in Principia Mathematica , Whitehead and Russell have to expend a number of lines of rather opaque effort in order to establish that, indeed, it is sensical to say: "1+1=2". In short, comprehensibility is favoured over formality in written discourse.
Still, advocates of automated theorem provers may argue that the formalisation of proof does improve the mathematical rigour by disclosing gaps or flaws in informal written discourse. When the correctness of a proof is disputed, formalisation is a way to settle such a dispute as it helps to reduce misinterpretations or ambiguity.
The role of mathematical rigour in relation to physics is twofold:
Both aspects of mathematical rigour in physics have attracted considerable attention in philosophy of science (see, for example, ref. [ 10 ] and ref. [ 11 ] and the works quoted therein).
Rigour in the classroom is a hotly debated topic amongst educators. Even the semantic meaning of the word is contested.
Generally speaking, classroom rigour consists of multi-faceted, challenging instruction and correct placement of the student. Students excelling in formal operational thought tend to excel in classes for gifted students. [ citation needed ] Students who have not reached that final stage of cognitive development , according to developmental psychologist Jean Piaget , can build upon those skills with the help of a properly trained teacher.
Rigour in the classroom is commonly called "rigorous instruction". It is instruction that requires students to construct meaning for themselves, impose structure on information, integrate individual skills into processes, operate within but at the outer edge of their abilities, and apply what they learn in more than one context and to unpredictable situations. [ 12 ] | https://en.wikipedia.org/wiki/Rigour |
The Riley oxidation is a selenium dioxide -mediated oxidation of methylene groups adjacent to carbonyls . It was first reported by Harry Lister Riley and co-workers in 1932. [ 1 ] In the decade that ensued, selenium -mediated oxidation rapidly expanded in use, and in 1939, Andre Guillemonat and co-workers disclosed the selenium dioxide-mediated oxidation of olefins at the allylic position. [ 2 ] Today, selenium-dioxide-mediated oxidation of methylene groups to alpha ketones and at the allylic position of olefins is known as the Riley Oxidation. [ 3 ]
The mechanism of oxidation of −CH 2 C(O)R groups by SeO 2 has been well investigated. [ 4 ] [ 5 ] [ 6 ] [ 7 ] The oxidation of carbonyl alpha methylene positions begins with attack by the enol tautomer at the electrophilic selenium center. Following rearrangement and loss of water, a second equivalent of water attacks the alpha position. Red amorphous selenium is liberated in the final step to give the 1,2-dicarbonyl product. [ 8 ] [ 9 ] : 4331
Allylic oxidation using selenium dioxide proceeds via an ene reaction at the electrophilic selenium center. A 2,3-sigmatropic shift , proceeding through an envelope-like transition state , gives the allylselenite ester, which upon hydrolysis gives the allylic alcohol. The ( E )-orientation about the double bond, a consequence of the envelope-like transition state, is established in the penultimate ester formation and is retained during the hydrolysis step, giving the ( E )-allylic alcohol product. [ 4 ] [ 10 ]
The Riley Oxidation is amenable to a variety of carbonyl and olefinic systems with a high degree of regiocontrol based on the substitution pattern of the given system.
Ketones with two available α-methylene positions react more quickly at the least hindered position: [ 1 ]
Allylic oxidation can be predicted by the substitution pattern on the olefin. In the case of 1,2-disubstituted olefins, reaction rates follow CH > CH 2 > CH 3 :
Geminally-substituted olefins react in the same order of reaction rates as above: [ 2 ]
Trisubstituted alkenes react at the more substituted end of the double bond, with reactivity following the order CH 2 > CH 3 > CH:
Due to the rearrangement of the double bond, terminal olefins tend to give primary allylic alcohols:
Cyclic alkenes prefer to undergo allylic oxidation within the ring, rather than the allylic position at the sidechain. In bridged ring systems, Bredt’s rule is followed and bridgehead positions are not oxidized:
In their strychnine total synthesis , R.B. Woodward and co-workers leveraged the Riley Oxidation to attain the trans -glyoxal. Epimerization of the alpha hydrogen led to the cis -glyoxal, which spontaneously underwent cyclization with the secondary amine to yield dehydrostrychninone. [ 11 ]
Selenium-dioxide mediated oxidation was used in the synthesis of the diterpenoid ryanodol. [ 12 ]
Selenium dioxide-mediated allylic oxidation was also used to access ingenol. [ 13 ]
Rimose is an adjective used to describe a surface that is cracked or fissured. [ 1 ]
The term is often used in describing crustose lichens . [ 1 ] A rimose surface of a lichen is sometimes contrasted to the surface being areolate . [ 1 ] Areolate is an extreme form of being rimose, where the cracks or fissures are so deep that they create island-like pieces called areoles , which look like the "islands" of mud on the surface of a dry lake bed . [ 1 ] Rimose and areolate are contrasted with being verrucose , or "warty". [ 1 ] Verrucose surfaces have warty bumps which are distinct, but not separated by cracks. [ 1 ]
In mycology the term describes mushrooms whose caps crack in a radial pattern, as commonly found in the genera Inocybe and Inosperma . [ 2 ] | https://en.wikipedia.org/wiki/Rimose |
Rindler coordinates are a coordinate system used in the context of special relativity to describe the hyperbolic acceleration of a uniformly accelerating reference frame in flat spacetime. In relativistic physics the coordinates of a hyperbolically accelerated reference frame [ H 1 ] [ 1 ] constitute an important and useful coordinate chart representing part of flat Minkowski spacetime . [ 2 ] [ 3 ] [ 4 ] [ 5 ] In special relativity , a uniformly accelerating particle undergoes hyperbolic motion , for which a uniformly accelerating frame of reference in which it is at rest can be chosen as its proper reference frame . The phenomena in this hyperbolically accelerated frame can be compared to effects arising in a homogeneous gravitational field . For a general overview of accelerations in flat spacetime, see Acceleration (special relativity) and Proper reference frame (flat spacetime) .
In this article, the speed of light is defined by c = 1 , the inertial coordinates are ( X , Y , Z , T ) , and the hyperbolic coordinates are ( x , y , z , t ) . These hyperbolic coordinates can be separated into two main variants depending on the accelerated observer's position: If the observer is located at time T = 0 at position X = 1/α (with α as the constant proper acceleration measured by a comoving accelerometer ), then the hyperbolic coordinates are often called Rindler coordinates with the corresponding Rindler metric . [ 6 ] If the observer is located at time T = 0 at position X = 0 , then the hyperbolic coordinates are sometimes called Møller coordinates [ 1 ] or Kottler–Møller coordinates with the corresponding Kottler–Møller metric . [ 7 ] An alternative chart often related to observers in hyperbolic motion is obtained using Radar coordinates [ 8 ] which are sometimes called Lass coordinates . [ 9 ] [ 10 ] Both the Kottler–Møller coordinates as well as Lass coordinates are denoted as Rindler coordinates as well. [ 11 ]
Regarding the history, such coordinates were introduced soon after the advent of special relativity, when they were studied (fully or partially) alongside the concept of hyperbolic motion: In relation to flat Minkowski spacetime by Albert Einstein (1907, 1912), [ H 2 ] Max Born (1909), [ H 1 ] Arnold Sommerfeld (1910), [ H 3 ] Max von Laue (1911), [ H 4 ] Hendrik Lorentz (1913), [ H 5 ] Friedrich Kottler (1914), [ H 6 ] Wolfgang Pauli (1921), [ H 7 ] Karl Bollert (1922), [ H 8 ] Stjepan Mohorovičić (1922), [ H 9 ] Georges Lemaître (1924), [ H 10 ] Einstein & Nathan Rosen (1935), [ H 2 ] Christian Møller (1943, 1952), [ H 11 ] Fritz Rohrlich (1963), [ 12 ] Harry Lass (1963), [ 13 ] and in relation to both flat and curved spacetime of general relativity by Wolfgang Rindler (1960, 1966). [ 14 ] [ 15 ] For details and sources, see § History .
The worldline of a body in hyperbolic motion having constant proper acceleration α {\displaystyle \alpha } in the X {\displaystyle X} -direction as a function of proper time τ {\displaystyle \tau } and rapidity α τ {\displaystyle \alpha \tau } can be given by [ 16 ] T = x sinh ⁡ ( α τ ) , X = x cosh ⁡ ( α τ ) {\displaystyle T=x\sinh(\alpha \tau ),\quad X=x\cosh(\alpha \tau )}
where x = 1 / α {\displaystyle x=1/\alpha } is constant and α τ {\displaystyle \alpha \tau } is variable, with the worldline resembling the hyperbola X 2 − T 2 = x 2 {\displaystyle X^{2}-T^{2}=x^{2}} . Sommerfeld [ H 3 ] [ 17 ] showed that the equations can be reinterpreted by defining x {\displaystyle x} as variable and α τ {\displaystyle \alpha \tau } as constant, so that it represents the simultaneous "rest shape" of a body in hyperbolic motion measured by a comoving observer. By using the proper time of the observer as the time of the entire hyperbolically accelerated frame by setting τ = t {\displaystyle \tau =t} , the transformation formulas between the inertial coordinates and the hyperbolic coordinates are consequently: [ 6 ] [ 9 ] T = x sinh ⁡ ( α t ) , X = x cosh ⁡ ( α t ) , Y = y , Z = z {\displaystyle T=x\sinh(\alpha t),\quad X=x\cosh(\alpha t),\quad Y=y,\quad Z=z} ( 1a )
with the inverse x = X 2 − T 2 , t = 1 α artanh ⁡ ( T X ) , y = Y , z = Z {\displaystyle x={\sqrt {X^{2}-T^{2}}},\quad t={\frac {1}{\alpha }}\operatorname {artanh} \left({\frac {T}{X}}\right),\quad y=Y,\quad z=Z}
Differentiated and inserted into the Minkowski metric d s 2 = − d T 2 + d X 2 + d Y 2 + d Z 2 {\displaystyle ds^{2}=-dT^{2}+dX^{2}+dY^{2}+dZ^{2}} ,
the metric in the hyperbolically accelerated frame follows as d s 2 = − ( α x ) 2 d t 2 + d x 2 + d y 2 + d z 2 {\displaystyle ds^{2}=-(\alpha x)^{2}dt^{2}+dx^{2}+dy^{2}+dz^{2}}
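A quick numerical sanity check of the chart (a sketch assuming c = 1 and α = 1, where the map is T = x sinh t, X = x cosh t): constant-x worldlines are the hyperbolae X² − T² = x², and small coordinate steps reproduce the line element ds² = −x² dt² + dx²:

```python
import math

def to_minkowski(t, x):
    """Rindler (t, x) -> Minkowski (T, X), with c = 1 and alpha = 1."""
    return x * math.sinh(t), x * math.cosh(t)

# Worldlines of constant x are the hyperbolae X^2 - T^2 = x^2:
T, X = to_minkowski(0.7, 2.0)
assert abs((X**2 - T**2) - 4.0) < 1e-12

# Small coordinate steps reproduce ds^2 = -x^2 dt^2 + dx^2:
t0, x0, dt, dx = 0.7, 2.0, 1e-5, 1e-5
T0, X0 = to_minkowski(t0, x0)
T1, X1 = to_minkowski(t0 + dt, x0 + dx)
ds2 = -(T1 - T0)**2 + (X1 - X0)**2
assert abs(ds2 - (-(x0 * dt)**2 + dx**2)) < 1e-12
```

The cross terms in −dT² + dX² cancel exactly, which is why the Rindler metric stays diagonal.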
These transformations define the Rindler observer as an observer that is "at rest" in Rindler coordinates, i.e., maintaining constant x , y , z , and only varying t as time passes. The coordinates are valid in the region 0 < X < ∞ , − X < T < X {\displaystyle 0<X<\infty ,\;-X<T<X} , which is often called the Rindler wedge , if α {\displaystyle \alpha } represents the proper acceleration (along the hyperbola x = 1 / α {\displaystyle x=1/\alpha } ) of the Rindler observer whose proper time is defined to be equal to Rindler coordinate time. To maintain this world line, the observer must accelerate with a constant proper acceleration, with Rindler observers closer to x = 0 {\displaystyle x=0} (the Rindler horizon ) having greater proper acceleration. All the Rindler observers are instantaneously at rest at time T = 0 {\displaystyle T=0} in the inertial frame, and at this time a Rindler observer with proper acceleration α i {\displaystyle \alpha _{i}} will be at position X = 1 / α i {\displaystyle X=1/\alpha _{i}} (really X = c 2 / α i {\displaystyle X=c^{2}/\alpha _{i}} , but we assume units where c = 1 {\displaystyle c=1} ), which is also that observer's constant distance from the Rindler horizon in Rindler coordinates. If all Rindler observers set their clocks to zero at T = 0 {\displaystyle T=0} , then when defining a Rindler coordinate system we have a choice of which Rindler observer's proper time will be equal to the coordinate time t {\displaystyle t} in Rindler coordinates, and this observer's proper acceleration defines the value of α {\displaystyle \alpha } above (for other Rindler observers at different distances from the Rindler horizon, the coordinate time will equal some constant multiple of their own proper time). It is a common convention to define the Rindler coordinate system so that the Rindler observer whose proper time matches coordinate time is the one who has proper acceleration α = 1 {\displaystyle \alpha =1} , so that α {\displaystyle \alpha } can be eliminated from the equations. [ 18 ]
The above equation has been simplified for c = 1 {\displaystyle c=1} . The unsimplified equation is more convenient for finding the Rindler horizon distance, given an acceleration α {\displaystyle \alpha } .
The remainder of the article will follow the convention of setting both α = 1 {\displaystyle \alpha =1} and c = 1 {\displaystyle c=1} , so units for X {\displaystyle X} and x {\displaystyle x} will be 1 unit = c 2 / α = 1 {\displaystyle =c^{2}/\alpha =1} . Be mindful that setting α = 1 {\displaystyle \alpha =1} light-second/second 2 is very different from setting α = 1 {\displaystyle \alpha =1} light-year/year 2 . Even if we pick units where c = 1 {\displaystyle c=1} , the magnitude of the proper acceleration α {\displaystyle \alpha } will depend on our choice of units: for example, if we use units of light-years for distance, ( X {\displaystyle X} or x {\displaystyle x} ) and years for time, ( T {\displaystyle T} or t {\displaystyle t} ), this would mean α = 1 {\displaystyle \alpha =1} light year/year 2 , equal to about 9.5 meters/second 2 , while if we use units of light-seconds for distance, ( X {\displaystyle X} or x {\displaystyle x} ), and seconds for time, ( T {\displaystyle T} or t {\displaystyle t} ), this would mean α = 1 {\displaystyle \alpha =1} light-second/second 2 , or 299 792 458 meters/second 2 ).
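As a concrete illustration of the horizon distance c²/α mentioned above (illustrative numbers only): for a proper acceleration of one standard gravity, the Rindler horizon sits just under one light-year behind the observer:

```python
c = 299_792_458.0   # speed of light, m/s
g = 9.80665         # standard gravity, m/s^2 (illustrative choice of alpha)
ly = 9.4607e15      # metres per light-year

d = c**2 / g        # distance to the Rindler horizon, d = c^2 / alpha
print(d / ly)       # about 0.97 light-years
```

This is the same fact, in SI units, as the remark that α = 1 light-year/year² is roughly 9.5 m/s².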
A more general derivation of the transformation formulas is given, when the corresponding Fermi–Walker tetrad is formulated from which the Fermi coordinates or Proper coordinates can be derived. [ 19 ] Depending on the choice of origin of these coordinates, one can derive the metric, the time dilation between the time at the origin d t 0 {\displaystyle dt_{0}} and d t {\displaystyle dt} at point x {\displaystyle x} , and the coordinate light speed | d x | / | d t | {\displaystyle |dx|/|dt|} (this variable speed of light does not contradict special relativity, because it is only an artifact of the accelerated coordinates employed, while in inertial coordinates it remains constant). Instead of Fermi coordinates, also Radar coordinates can be used, which are obtained by determining the distance using light signals (see section Notions of distance ), by which metric, time dilation and speed of light do not depend on the coordinates anymore – in particular, the coordinate speed of light remains identical with the speed of light ( c = 1 ) {\displaystyle (c=1)} in inertial frames:
In the new chart ( 1a ) with c = 1 {\displaystyle c=1} and α = 1 {\displaystyle \alpha =1} , it is natural to take the coframe field σ 0 = x d t , σ 1 = d x , σ 2 = d y , σ 3 = d z {\displaystyle \sigma ^{0}=x\,dt,\;\sigma ^{1}=dx,\;\sigma ^{2}=dy,\;\sigma ^{3}=dz}
which has the dual frame field e → 0 = 1 x ∂ t , e → 1 = ∂ x , e → 2 = ∂ y , e → 3 = ∂ z {\displaystyle {\vec {e}}_{0}={\frac {1}{x}}\partial _{t},\;{\vec {e}}_{1}=\partial _{x},\;{\vec {e}}_{2}=\partial _{y},\;{\vec {e}}_{3}=\partial _{z}}
This defines a local Lorentz frame in the tangent space at each event (in the region covered by our Rindler chart, namely the Rindler wedge). The integral curves of the timelike unit vector field e → 0 {\displaystyle {\vec {e}}_{0}} give a timelike congruence , consisting of the world lines of a family of observers called the Rindler observers . In the Rindler chart, these world lines appear as the vertical coordinate lines x = x 0 , y = y 0 , z = z 0 {\displaystyle x=x_{0},\;y=y_{0},\;z=z_{0}} . Using the coordinate transformation above, we find that these correspond to hyperbolic arcs in the original Cartesian chart.
As with any timelike congruence in any Lorentzian manifold, this congruence has a kinematic decomposition (see Raychaudhuri equation ). In this case, the expansion and vorticity of the congruence of Rindler observers vanish . The vanishing of the expansion tensor implies that each of our observers maintains constant distance to his neighbors . The vanishing of the vorticity tensor implies that the world lines of our observers are not twisting about each other; this is a kind of local absence of "swirling".
The acceleration vector of each observer is given by the covariant derivative ∇ e → 0 e → 0 = 1 x e → 1 {\displaystyle \nabla _{{\vec {e}}_{0}}{\vec {e}}_{0}={\frac {1}{x}}{\vec {e}}_{1}}
That is, each Rindler observer is accelerating in the ∂ x {\displaystyle \partial _{x}} direction. Individually speaking, each observer is in fact accelerating with constant magnitude in this direction, so their world lines are the Lorentzian analogs of circles, which are the curves of constant path curvature in the Euclidean geometry.
Because the Rindler observers are vorticity-free , they are also hypersurface orthogonal . The orthogonal spatial hyperslices are t = t 0 {\displaystyle t=t_{0}} ; these appear as horizontal half-planes in the Rindler chart and as half-planes through T = X = 0 {\displaystyle T=X=0} in the Cartesian chart (see the figure above). Setting d t = 0 {\displaystyle dt=0} in the line element, we see that these have the ordinary Euclidean geometry, d σ 2 = d x 2 + d y 2 + d z 2 , ∀ x > 0 , ∀ y , z {\displaystyle d\sigma ^{2}=dx^{2}+dy^{2}+dz^{2},\;\forall x>0,\forall y,z} . Thus, the spatial coordinates in the Rindler chart have a very simple interpretation consistent with the claim that the Rindler observers are mutually stationary. We will return to this rigidity property of the Rindler observers a bit later in this article.
Note that Rindler observers with smaller constant x coordinate are accelerating harder to keep up. This may seem surprising because in Newtonian physics, observers who maintain constant relative distance must share the same acceleration. But in relativistic physics, we see that the trailing endpoint of a rod which is accelerated by some external force (parallel to its symmetry axis) must accelerate a bit harder than the leading endpoint, or else it must ultimately break. This is a manifestation of Lorentz contraction . As the rod accelerates, its velocity increases and its length decreases. Since it is getting shorter, the back end must accelerate harder than the front. Another way to look at it is: the back end must achieve the same change in velocity in a shorter period of time. This leads to a differential equation showing that, at some distance, the acceleration of the trailing end diverges, resulting in the Rindler horizon .
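The 1/x dependence of the proper acceleration can be verified numerically (a sketch with c = 1, parametrizing the constant-x worldline by proper time τ as T = x sinh(τ/x), X = x cosh(τ/x)):

```python
import math

def proper_acceleration(x, tau=0.3, h=1e-5):
    """Estimate the proper acceleration of the Rindler observer at
    coordinate x (c = 1), whose inertial-frame worldline is
    T = x sinh(tau/x), X = x cosh(tau/x)."""
    T = lambda s: x * math.sinh(s / x)
    X = lambda s: x * math.cosh(s / x)
    # second derivatives with respect to proper time (central differences)
    d2T = (T(tau + h) - 2 * T(tau) + T(tau - h)) / h**2
    d2X = (X(tau + h) - 2 * X(tau) + X(tau - h)) / h**2
    # norm of the (spacelike) acceleration four-vector
    return math.sqrt(d2X**2 - d2T**2)

# Observers closer to the horizon x = 0 accelerate harder: alpha = 1/x.
assert abs(proper_acceleration(2.0) - 0.5) < 1e-3
assert abs(proper_acceleration(0.5) - 2.0) < 1e-3
```

The result is independent of τ, reflecting the constant magnitude of each observer's acceleration noted above.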
This phenomenon is the basis of a well known "paradox", Bell's spaceship paradox . However, it is a simple consequence of relativistic kinematics. One way to see this is to observe that the magnitude of the acceleration vector is just the path curvature of the corresponding world line. But the world lines of our Rindler observers are the analogs of a family of concentric circles in the Euclidean plane, so we are simply dealing with the Lorentzian analog of a fact familiar to speed skaters: in a family of concentric circles, inner circles must bend faster (per unit arc length) than the outer ones .
It is worthwhile to also introduce an alternative frame, given in the Minkowski chart by the natural choice
Transforming these vector fields using the coordinate transformation given above, we find that in the Rindler chart (in the Rindler wedge) this frame becomes
Computing the kinematic decomposition of the timelike congruence defined by the timelike unit vector field f⃗₀, we find that the expansion and vorticity again vanish, and in addition the acceleration vector vanishes, ∇_{f⃗₀} f⃗₀ = 0. In other words, this is a geodesic congruence ; the corresponding observers are in a state of inertial motion . In the original Cartesian chart, these observers, whom we will call Minkowski observers , are at rest.
In the Rindler chart, the world lines of the Minkowski observers appear as hyperbolic secant curves asymptotic to the coordinate plane x = 0. Specifically, in Rindler coordinates, the world line of the Minkowski observer passing through the event t = t₀, x = x₀, y = y₀, z = z₀ is
where s is the proper time of this Minkowski observer. Note that only a small portion of his history is covered by the Rindler chart. This shows explicitly why the Rindler chart is not geodesically complete ; timelike geodesics run outside the region covered by the chart in finite proper time. Of course, we already knew that the Rindler chart cannot be geodesically complete, because it covers only a portion of the original Cartesian chart, which is a geodesically complete chart.
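This incompleteness can be made concrete numerically. The setup below is an assumed illustration, not from the article: an inertial observer at rest at X = x₀, Y = Z = 0 has Cartesian worldline T = s, X = x₀ (s their proper time), and the inverse transformation x = √(X² − T²), t = artanh(T/X) shows the Rindler chart covers only −x₀ < s < x₀.

```python
# Trace an inertial (Minkowski) observer's worldline in Rindler coordinates:
# x = sqrt(X^2 - T^2), t = artanh(T/X), with T = s and X = x0 fixed.
import math

x0 = 1.0
for s in [0.0, 0.5, 0.9, 0.99, 0.999]:
    x = math.sqrt(x0**2 - s**2)
    t = math.atanh(s / x0)
    print(f"s = {s:6.3f}  ->  x = {x:.4f}, t = {t:.4f}")

# As s -> x0, x -> 0 while t -> infinity: the observer crosses the
# Rindler horizon and leaves the chart at finite proper time s = x0.
```

The diverging t at finite s is exactly the statement that this timelike geodesic exits the chart in finite proper time.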
In the case depicted in the figure, x₀ = 1 and we have drawn (correctly scaled and boosted) the light cones at s ∈ {−1/2, 0, 1/2}.
The Rindler coordinate chart has a coordinate singularity at x = 0, where the metric tensor (expressed in the Rindler coordinates) has vanishing determinant . This happens because as x → 0 the acceleration of the Rindler observers diverges. As we can see from the figure illustrating the Rindler wedge, the locus x = 0 in the Rindler chart corresponds to the locus T 2 = X 2 , X > 0 in the Cartesian chart, which consists of two null half-planes, each ruled by a null geodesic congruence.
For the moment, we simply consider the Rindler horizon as the boundary of the Rindler coordinates. If we consider the set of accelerating observers who have a constant position in Rindler coordinates, none of them can ever receive light signals from events with T ≥ X (on the diagram, these would be events on or to the left of the line T = X which the upper red horizon lies along; these observers could, however, receive signals from events with T ≥ X if they stopped their acceleration and crossed this line themselves), nor could they have ever sent signals to events with T ≤ −X (events on or to the left of the line T = −X which the lower red horizon lies along; those events lie outside all future light cones of their past world line). Also, if we consider members of this set of accelerating observers closer and closer to the horizon, in the limit as the distance to the horizon approaches zero, the constant proper acceleration experienced by an observer at this distance (which would also be the G-force experienced by such an observer) would approach infinity. Both of these facts would also be true if we were considering a set of observers hovering outside the event horizon of a black hole , each observer hovering at a constant radius in Schwarzschild coordinates . In fact, the geometry close to the event horizon of a black hole can be described in Rindler coordinates. Hawking radiation in the case of an accelerating frame is referred to as Unruh radiation . The connection is the equivalence of acceleration with gravitation.
The geodesic equations in the Rindler chart are easily obtained from the geodesic Lagrangian ; they are
Of course, in the original Cartesian chart, the geodesics appear as straight lines, so we could easily obtain them in the Rindler chart using our coordinate transformation. However, it is instructive to obtain and study them independently of the original chart, and we shall do so in this section.
From the first, third, and fourth we immediately obtain the first integrals
But from the line element we have ε = −x² ṫ² + ẋ² + ẏ² + ż², where ε ∈ {−1, 0, 1} for timelike, null, and spacelike geodesics, respectively. This gives the fourth first integral, namely
This suffices to give the complete solution of the geodesic equations.
In the case of null geodesics , from E²/x² − P² − Q² with nonzero E, we see that the x coordinate ranges over the interval
The complete seven-parameter family giving any null geodesic through any event in the Rindler wedge is
Plotting the tracks of some representative null geodesics through a given event (that is, projecting to the hyperslice t = 0), we obtain a picture which looks suspiciously like the family of all semicircles through a point and orthogonal to the Rindler horizon (see the figure). [ 27 ]
The fact that in the Rindler chart, the projections of null geodesics into any spatial hyperslice for the Rindler observers are simply semicircular arcs can be verified directly from the general solution just given, but there is a very simple way to see this. A static spacetime is one in which a vorticity-free timelike Killing vector field can be found. In this case, we have a uniquely defined family of (identical) spatial hyperslices orthogonal to the corresponding static observers (who need not be inertial observers). This allows us to define a new metric on any of these hyperslices which is conformally related to the original metric inherited from the spacetime, but with the property that geodesics in the new metric (note this is a Riemannian metric on a Riemannian three-manifold) are precisely the projections of the null geodesics of spacetime. This new metric is called the Fermat metric , and in a static spacetime endowed with a coordinate chart in which the line element has the form
the Fermat metric on t = 0 is simply
(where the metric coefficients are understood to be evaluated at t = 0).
In the Rindler chart, the timelike translation ∂_t is such a Killing vector field, so this is a static spacetime (not surprisingly, since Minkowski spacetime is of course trivially a static vacuum solution of the Einstein field equation ). Therefore, we may immediately write down the Fermat metric for the Rindler observers:
But this is the well-known line element of hyperbolic three-space H 3 in the upper half space chart . This is closely analogous to the well known upper half plane chart for the hyperbolic plane H 2 , which is familiar to generations of complex analysis students in connection with conformal mapping problems (and much more), and many mathematically minded readers already know that the geodesics of H 2 in the upper half plane model are simply semicircles (orthogonal to the circle at infinity represented by the real axis).
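The semicircle claim can also be verified without the Fermat metric, by projecting an explicit null line. The construction below is an assumed illustration (the particular null line is an arbitrary choice): in the Cartesian chart take T = λ, X = X₀ + aλ, Y = bλ with a² + b² = 1 (a null direction), and project into the Rindler slice via (x, y) = (√(X² − T²), Y).

```python
# Check numerically that the projected track of a null geodesic lies on a
# circle centred on the Rindler horizon x = 0 (hence a semicircle in x > 0).
import math

X0, a = 2.0, 0.6
b = math.sqrt(1 - a**2)   # a^2 + b^2 = 1 makes the Cartesian line null
c = a * X0 / b            # candidate circle centre (0, c) on the horizon

radii = []
for lam in [0.0, 0.3, 0.7, 1.1]:
    T, X, Y = lam, X0 + a * lam, b * lam
    x, y = math.sqrt(X**2 - T**2), Y      # projection into the t = 0 slice
    radii.append(math.sqrt(x**2 + (y - c)**2))

print(radii)   # all radii equal: the track is a circle orthogonal to x = 0
```

Expanding x² + (y − c)² symbolically shows the λ and λ² terms cancel exactly when a² + b² = 1 and c = aX₀/b, leaving the constant X₀² + c².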
Since the Rindler chart is a coordinate chart for Minkowski spacetime, we expect to find ten linearly independent Killing vector fields. Indeed, in the Cartesian chart we can readily find ten linearly independent Killing vector fields, generating respectively the one-parameter subgroups of one time translation, three spatial translations, three rotations, and three boosts. Together these generate the proper orthochronous Poincaré group, the symmetry group of Minkowski spacetime.
However, it is instructive to write down and solve the Killing vector equations directly. We obtain four familiar looking Killing vector fields
(time translation, spatial translations orthogonal to the direction of acceleration, and spatial rotation orthogonal to the direction of acceleration) plus six more:
(where the signs are chosen consistently + or −). We leave it as an exercise to figure out how these are related to the standard generators; here we wish to point out that we must be able to obtain generators equivalent to ∂_T in the Cartesian chart, yet the Rindler wedge is obviously not invariant under this translation. How can this be? The answer is that, like anything defined by a system of partial differential equations on a smooth manifold, the Killing equation will in general have locally defined solutions, but these might not exist globally. That is, with suitable restrictions on the group parameter, a Killing flow can always be defined in a suitable local neighborhood , but the flow might not be well-defined globally . This has nothing to do with Lorentzian manifolds per se, since the same issue arises in the study of general smooth manifolds .
One of the many valuable lessons to be learned from a study of the Rindler chart is that there are in fact several distinct (but reasonable) notions of distance which can be used by the Rindler observers.
The first is the one we have tacitly employed above: the induced Riemannian metric on the spatial hyperslices t = t₀. We will call this the ruler distance since it corresponds to this induced Riemannian metric, but its operational meaning might not be immediately apparent.
From the standpoint of physical measurement, a more natural notion of distance between two world lines is the radar distance . This is computed by sending a null geodesic from the world line of our observer (event A) to the world line of some small object, whereupon it is reflected (event B) and returns to the observer (event C). The radar distance is then obtained by dividing the round-trip travel time, as measured by an ideal clock carried by our observer, by two.
(In Minkowski spacetime, fortunately, we can ignore the possibility of multiple null geodesic paths between two world lines, but in cosmological models and other applications [ which? ] things are not so simple. We should also caution against assuming that this notion of distance between two observers gives a notion which is symmetric under interchanging the observers.)
In particular, consider a pair of Rindler observers with coordinates x = x₀, y = 0, z = 0 and x = x₀ + h, y = 0, z = 0, respectively. (Note that the first of these, the trailing observer, is accelerating a bit harder, in order to keep up with the leading observer.) Setting dy = dz = 0 in the Rindler line element, we readily obtain the equation of null geodesics moving in the direction of acceleration:
Therefore, the radar distance between these two observers is given by
This is a bit smaller than the ruler distance, but for nearby observers the discrepancy is negligible.
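A small numerical sketch makes this comparison concrete. It assumes the standard reading of the Rindler line element (c = 1): setting dσ = dy = dz = 0 in −x² dt² + dx² gives dt = ±dx/x for radial light rays, so the round-trip coordinate time between x₀ and x₀ + h is 2 ln(1 + h/x₀), the trailing observer's clock runs at rate x₀, and the radar distance is half the round-trip proper time.

```python
# Compare radar distance x0*ln(1 + h/x0) with ruler distance h for a pair
# of Rindler observers separated along the direction of acceleration.
import math

def radar_distance(x0, h):
    """Half the round-trip proper time of a light signal, by the trailing
    observer's clock (derived from dt = dx/x and dtau = x0*dt)."""
    return x0 * math.log(1 + h / x0)

x0 = 1.0
for h in [0.001, 0.01, 0.1, 1.0]:
    print(h, radar_distance(x0, h))
# Radar distance ~ h - h^2/(2*x0) + ...: always slightly below the ruler
# distance h, with a discrepancy that vanishes for nearby observers.
```

The expansion h − h²/(2x₀) + O(h³) shows both facts quoted in the text: the radar distance is a bit smaller, and the gap is negligible for small h.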
A third possible notion of distance is this: our observer measures the angle subtended by a unit disk placed on some object (not a point object), as it appears from his location. We call this the optical diameter distance . Because of the simple character of null geodesics in Minkowski spacetime, we can readily determine the optical distance between our pair of Rindler observers (aligned with the direction of acceleration). From a sketch it should be plausible that the optical diameter distance scales like h + 1/x₀ + O(h³). Therefore, in the case of a trailing observer estimating distance to a leading observer (the case h > 0), the optical distance is a bit larger than the ruler distance, which is a bit larger than the radar distance. The reader should now take a moment to consider the case of a leading observer estimating distance to a trailing observer.
There are other notions of distance, but the main point is clear: while the values of these various notions will in general disagree for a given pair of Rindler observers, they all agree that every pair of Rindler observers maintains constant distance . The fact that very nearby Rindler observers are mutually stationary follows from the fact, noted above, that the expansion tensor of the Rindler congruence vanishes identically. However, we have shown here that in various senses, this rigidity property holds at larger scales. This is truly a remarkable rigidity property, given the well-known fact that in relativistic physics, no rod can be accelerated rigidly (and no disk can be spun up rigidly ) — at least, not without sustaining inhomogeneous stresses. The easiest way to see this is to observe that in Newtonian physics, if we "kick" a rigid body, all elements of matter in the body will immediately change their state of motion. This is of course incompatible with the relativistic principle that no information having any physical effect can be transmitted faster than the speed of light.
It follows that if a rod is accelerated by some external force applied anywhere along its length, the elements of matter in various different places in the rod cannot all feel the same magnitude of acceleration if the rod is not to extend without bound and ultimately break. In other words, an accelerated rod which does not break must sustain stresses which vary along its length. Furthermore, in any thought experiment with time-varying forces, whether we "kick" an object or try to accelerate it gradually, we cannot evade the problem of avoiding mechanical models which are inconsistent with relativistic kinematics (because distant parts of the body respond too quickly to an applied force).
Returning to the question of the operational significance of the ruler distance, we see that this should be the distance which our observers will obtain should they very slowly pass from hand to hand a small ruler which is repeatedly set end to end. But justifying this interpretation in detail would require some kind of material model.
Rindler coordinates as described above can be generalized to curved spacetime, as Fermi normal coordinates . The generalization essentially involves constructing an appropriate orthonormal tetrad and then transporting it along the given trajectory using the Fermi–Walker transport rule. For details, see the paper by Ni and Zimmermann in the references below. Such a generalization actually enables one to study inertial and gravitational effects in an Earth-based laboratory, as well as the more interesting coupled inertial-gravitational effects.
Albert Einstein (1907) [ H 13 ] studied the effects within a uniformly accelerated frame, obtaining equations for coordinate dependent time dilation and speed of light equivalent to ( 2c ), and in order to make the formulas independent of the observer's origin, he obtained time dilation ( 2i ) in formal agreement with Radar coordinates. While introducing the concept of Born rigidity , Max Born (1909) [ H 14 ] noted that the formulas for hyperbolic motion can be used as transformations into a "hyperbolically accelerated reference system" ( German : hyperbolisch beschleunigtes Bezugsystem ) equivalent to ( 2d ). Born's work was further elaborated by Arnold Sommerfeld (1910) [ H 15 ] and Max von Laue (1911) [ H 16 ] who both obtained ( 2d ) using imaginary numbers, which was summarized by Wolfgang Pauli (1921) [ 16 ] who besides coordinates ( 2d ) also obtained metric ( 2e ) using imaginary numbers. Einstein (1912) [ H 17 ] studied a static gravitational field and obtained the Kottler–Møller metric ( 2b ) as well as approximations to formulas ( 2a ) using a coordinate dependent speed of light. [ 28 ] Hendrik Lorentz (1913) [ H 18 ] obtained coordinates similar to ( 2d , 2e , 2f ) while studying Einstein's equivalence principle and the uniform gravitational field.
A detailed description was given by Friedrich Kottler (1914), [ H 19 ] who formulated the corresponding orthonormal tetrad , transformation formulas and metric ( 2a , 2b ). Also Karl Bollert (1922) [ H 20 ] obtained the metric ( 2b ) in his study of uniform acceleration and uniform gravitational fields. In a paper concerned with Born rigidity, Georges Lemaître (1924) [ H 21 ] obtained coordinates and metric ( 2a , 2b ). Albert Einstein and Nathan Rosen (1935) described ( 2d , 2e ) as the "well known" expressions for a homogeneous gravitational field. [ H 22 ] After Christian Møller (1943) [ H 11 ] obtained ( 2a , 2b ) in a study related to homogeneous gravitational fields, he (1952) [ H 23 ] as well as Misner, Thorne & Wheeler (1973) [ 2 ] used Fermi–Walker transport to obtain the same equations.
While these investigations were concerned with flat spacetime, Wolfgang Rindler (1960) [ 14 ] analyzed hyperbolic motion in curved spacetime, and showed (1966) [ 15 ] the analogy between the hyperbolic coordinates ( 2d , 2e ) in flat spacetime with Kruskal coordinates in Schwarzschild space . This influenced subsequent writers in their formulation of Unruh radiation measured by an observer in hyperbolic motion, which is similar to the description of Hawking radiation of black holes .
Born (1909) showed that the inner points of a Born rigid body in hyperbolic motion can only be in the region X/(X² − T²) > 0. [ H 24 ] Sommerfeld (1910) defined that the coordinates allowed for the transformation between inertial and hyperbolic coordinates must satisfy T < X. [ H 25 ] Kottler (1914) [ H 26 ] defined this region as X² − T² > 0, and pointed out the existence of a "border plane" ( German : Grenzebene ) c²/α + x, beyond which no signal can reach the observer in hyperbolic motion. This was called the "horizon of the observer" ( German : Horizont des Beobachters ) by Bollert (1922). [ H 27 ] Rindler (1966) [ 15 ] demonstrated the relation between such a horizon and the horizon in Kruskal coordinates.
Using Bollert's formalism, Stjepan Mohorovičić (1922) [ H 28 ] made a different choice for some parameter and obtained metric ( 2h ) with a printing error, which was corrected by Bollert (1922b) with another printing error, until a version without printing error was given by Mohorovičić (1923). In addition, Mohorovičić erroneously argued that metric ( 2b , now called Kottler–Møller metric) is incorrect, which was rebutted by Bollert (1922). [ H 29 ] Metric ( 2h ) was rediscovered by Harry Lass (1963), [ 13 ] who also gave the corresponding coordinates ( 2g ) which are sometimes called "Lass coordinates". [ 9 ] Metric ( 2h ), as well as ( 2a , 2b ), was also derived by Fritz Rohrlich (1963). [ 12 ] Eventually, the Lass coordinates ( 2g , 2h ) were identified with Radar coordinates by Desloge & Philpott (1987). [ 29 ] [ 8 ]
Ring-closing metathesis ( RCM ) is a widely used variation of olefin metathesis in organic chemistry for the synthesis of various unsaturated rings via the intramolecular metathesis of two terminal alkenes , which forms the cycloalkene as the E- or Z- isomers and volatile ethylene . [ 1 ] [ 2 ]
The most commonly synthesized ring sizes are between 5 and 7 atoms; [ 3 ] however, syntheses of macroheterocycles from 45 up to 90 members have also been reported. [ 4 ] [ 5 ] [ 6 ] These reactions are metal-catalyzed and proceed through a metallacyclobutane intermediate. [ 7 ] The reaction was first published by Didier Villemin in 1980, describing the synthesis of an Exaltolide precursor, [ 8 ] and was later popularized by Robert H. Grubbs and Richard R. Schrock , who shared the Nobel Prize in Chemistry , along with Yves Chauvin , in 2005 for their combined work in olefin metathesis . [ 9 ] [ 10 ] RCM is a favorite among organic chemists due to its broad substrate scope and its synthetic utility in the formation of rings which were previously difficult to access efficiently. [ 11 ] Since the only major by-product is ethylene , these reactions may also be considered atom-economic , an increasingly important concern in the development of green chemistry . [ 7 ]
There are several reviews published on ring-closing metathesis. [ 2 ] [ 3 ] [ 12 ] [ 13 ]
The first example of ring-closing metathesis was reported by Didier Villemin in 1980, when he synthesized an Exaltolide precursor using a WCl₆/Me₄Sn-catalyzed metathesis cyclization in 60-65% yield depending on ring size (A) . [ 8 ] In the following months, Jiro Tsuji reported a similar metathesis reaction describing the preparation of a macrolide catalyzed by WCl₆ and dimethyltitanocene (Cp₂TiMe₂) in a modest 17.9% yield (B) . [ 14 ] Tsuji describes the olefin metathesis reaction as "…potentially useful in organic synthesis" and addresses the need for the development of a more versatile catalyst to tolerate various functional groups.
In 1987, Siegfried Warwel and Hans Kaitker published a synthesis of symmetric macrocycles through a cross-metathesis dimerization of starting cycloolefins to afford C₁₄, C₁₈, and C₂₀ dienes in 58-74% yield, as well as C₁₆ in 30% yield, using Re₂O₇ on Al₂O₃ with Me₄Sn for catalyst activation. [ 15 ]
A decade after its initial discovery, Grubbs and Fu published two influential reports in 1992 detailing the synthesis of O- and N-heterocycles via RCM utilizing Schrock's molybdenum alkylidene catalysts, which had proven more robust and functional-group tolerant than the tungsten chloride catalysts. [ 16 ] [ 17 ] The synthetic route allowed access to dihydropyrans in high yield (89-93%) from readily available starting materials. [ 16 ] In addition, the synthesis of substituted pyrrolines , tetrahydropyridines, and amides was illustrated in modest to high yield (73-89%). [ 17 ] The driving force for the cyclization reaction was attributed to entropic favorability, since two molecules are formed from each molecule of starting material. The loss of the second molecule, ethylene , a highly volatile gas, drives the reaction in the forward direction according to Le Châtelier's principle . [ 16 ]
In 1993, Grubbs and others not only published a report on carbocycle synthesis using a molybdenum catalyst, [ 18 ] but also detailed the initial use of a novel ruthenium carbene complex for metathesis reactions, which later became a popular catalyst due to its extraordinary utility. The ruthenium catalysts are not sensitive to air and moisture, unlike the molybdenum catalysts. [ 19 ] The ruthenium catalysts, known better as the Grubbs Catalysts , as well as molybdenum catalysts, or Schrock’s Catalysts , are still used today for many metathesis reactions, including RCM. Overall, it was shown that metal-catalyzed RCM reactions were very effective in C-C bond forming reactions, and would prove of great importance in organic synthesis , chemical biology , materials science , and various other fields to access a wide variety of unsaturated and highly functionalized cyclic analogues. [ 2 ] [ 3 ]
The mechanism for transition metal -catalyzed olefin metathesis has been widely researched over the past forty years. [ 20 ] RCM undergoes a similar mechanistic pathway as other olefin metathesis reactions, such as cross metathesis (CM), ring-opening metathesis polymerization (ROMP) , and acyclic diene metathesis (ADMET) . [ 21 ] Since all steps in the catalytic cycle are considered reversible, it is possible for some of these other pathways to intersect with RCM depending on the reaction conditions and substrates. [ 12 ] In 1971, Chauvin proposed the formation of a metallacyclobutane intermediate through a [2+2] cycloaddition [ 21 ] [ 22 ] which then cycloeliminates to either yield the same alkene and catalytic species (a nonproductive pathway), or produce a new catalytic species and an alkylidene (a productive pathway). [ 23 ] This mechanism has become widely accepted among chemists and serves as the model for the RCM mechanism. [ 24 ]
Initiation occurs through substitution of the catalyst's alkene ligand with substrate. This process occurs via formation of a new alkylidene through one round of [2+2] cycloaddition and cycloelimination. Association and dissociation of a phosphine ligand also occurs in the case of Grubbs catalysts. [ 25 ] In an RCM reaction, the alkylidene undergoes an intramolecular [2+2] cycloaddition with the second reactive terminal alkene on the same molecule, rather than an intermolecular addition of a second molecule of starting material, a common competing side reaction which may lead to polymerization. [ 26 ] Cycloelimination of the metallacyclobutane intermediate forms the desired RCM product along with a [M]=CH₂, or alkylidene , species which reenters the catalytic cycle. While the loss of volatile ethylene is a driving force for RCM, [ 24 ] it is also generated by competing metathesis reactions and therefore cannot be considered the only driving force of the reaction. [ 2 ]
The reaction can be under kinetic or thermodynamic control depending on the exact reaction conditions, catalyst, and substrate. Common rings, 5- through 7-membered cycloalkenes, have a high tendency for formation and are often under greater thermodynamic control due to the enthalpic favorability of the cyclic products, as shown by Illuminati and Mandolini on the formation of lactone rings. [ 27 ] Smaller rings, between 5 and 8 atoms, are more thermodynamically favored over medium to large rings due to lower ring strain . Ring strain arises from abnormal bond angles resulting in a higher heat of combustion relative to the linear counterpart. [ 27 ] If the RCM product contains a strained olefin, polymerization becomes preferable through ring-opening metathesis polymerization of the newly formed olefin. [ 28 ] Medium rings in particular have greater ring strain, in part due to greater transannular interactions from opposing sides of the ring, but also the inability to orient the molecule in such a way to prevent penalizing gauche interactions . [ 27 ] [ 29 ] RCM may be considered to have a kinetic bias if the products cannot reenter the catalytic cycle or interconvert through an equilibrium. A kinetic product distribution could lead to mostly RCM products or may lead to oligomers and polymers, which are most often disfavored. [ 2 ]
With the advent of more reactive catalysts, equilibrium RCM is observed quite often, which may lead to a broader product distribution. The mechanism can be expanded to include the various competing equilibrium reactions, as well as to indicate where various side-products, such as oligomers, are formed along the reaction pathway. [ 30 ]
Although the reaction is still under thermodynamic control, an initial kinetic product , which may be dimerization or oligomerization of the starting material, is formed at the onset of the reaction as a result of higher catalyst reactivity. Increased catalyst activity also allows for the olefin products to reenter the catalytic cycle via non-terminal alkene addition onto the catalyst. [ 2 ] [ 31 ] [ 32 ] Due to additional reactivity in strained olefins, an equilibrium distribution of products is observed; however, this equilibrium can be perturbed through a variety of techniques to overturn the product ratios in favor of the desired RCM product. [ 33 ] [ 34 ]
Since the probability for reactive groups on the same molecule to encounter each other is inversely proportional to the ring size, the necessary intramolecular cycloaddition becomes increasingly difficult as ring size increases. This relationship means that the RCM of large rings is often performed under high dilution (0.05 - 100 mM) (A) [ 2 ] [ 35 ] to reduce intermolecular reactions; while the RCM of common rings can be performed at greater concentrations, even neat in rare cases. [ 36 ] [ 37 ] The equilibrium reaction can be driven to the desired thermodynamic products by increasing temperature (B) , to decrease viscosity of the reaction mixture and therefore increase thermal motion, as well as increasing or decreasing reaction time (C) . [ 30 ] [ 38 ]
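The practical cost of high dilution is simple to quantify. The concentration window (0.05-100 mM) is from the text, but the substrate quantity below is a made-up example, and the helper function is hypothetical; the point is only that required solvent volume scales as V = n/C.

```python
# Illustrative arithmetic: solvent volume needed to run an RCM at a
# target dilution.  Values other than the 0.05-100 mM window are examples.
def dilution_volume_mL(n_mmol, conc_mM):
    """Volume (mL) of solvent required to hold n_mmol of diene at conc_mM."""
    return n_mmol / conc_mM * 1000.0

# A hypothetical 1 mmol macrocyclization at 5 mM needs 200 mL of solvent,
# while a common-ring closure at 100 mM needs only 10 mL:
print(dilution_volume_mL(1.0, 5.0))    # 200.0
print(dilution_volume_mL(1.0, 100.0))  # 10.0
```

This 20-fold volume difference is why high dilution is flagged later in the text as a limiting factor for industrial scale-up.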
Catalyst choice (D) has also been shown to be critical in controlling product formation. A few of the catalysts commonly used in ring-closing metathesis are shown below. [ 11 ] [ 39 ] [ 40 ] [ 41 ]
Ring-closing metathesis has shown utility in the synthesis of 5- to 30-membered rings, [ 42 ] polycycles, and heterocycles containing atoms such as N , O , S , P , and even Si . [ 2 ] [ 3 ] [ 43 ] [ 44 ] Due to the functional-group tolerance of modern RCM reactions, the synthesis of structurally complex compounds containing a range of functional groups such as epoxides , ketones , alcohols , ethers , amines , amides , and many others can be achieved more easily than with previous methods. Oxygen and nitrogen heterocycles dominate due to their abundance in natural products and pharmaceuticals. Some examples are shown below (the red alkene indicates the C-C bond formed through RCM). [ 3 ]
In addition to terminal alkenes , tri- and tetrasubstituted alkenes have been used in RCM reactions to afford substituted cyclic olefin products. [ 32 ] Ring-closing metathesis has also been used to cyclize rings containing an alkyne to produce a new terminal alkene , or even undergo a second cyclization to form bicycles. This type of reaction is more formally known as enyne ring-closing metathesis . [ 7 ] [ 45 ]
In RCM reactions, two possible geometric isomers , either the E- or Z- isomer , may be formed. Stereoselectivity is dependent on the catalyst, ring strain, and starting diene. In smaller rings, Z- isomers predominate as the more stable product, reflecting ring-strain minimization. [ 46 ] In macrocycles, the E- isomer is often obtained as a result of the thermodynamic bias in RCM reactions, as E- isomers are more stable than Z- isomers . As a general trend, ruthenium NHC (N-heterocyclic carbene) catalysts favor E selectivity to form the trans isomer. This is in part due to the steric clash between the substituents, which adopt a trans configuration, as the most stable conformation in the metallacyclobutane intermediate, to form the E- isomer . [ 21 ] The synthesis of stereopure Z- isomers was previously achieved via ring-closing alkyne metathesis . However, in 2013 Grubbs reported the use of a chelating ruthenium catalyst to afford Z macrocycles in high selectivity. The selectivity is attributed to the increased steric clash between the catalyst ligands and the metallacyclobutane intermediate that is formed. The increased steric interactions in the transition state lead to the Z olefin rather than the E olefin, because the transition state required to form the E- isomer is highly disfavored. [ 47 ]
Additives are also used to overturn conformational preferences, increase reaction concentration, and chelate highly polar groups, such as esters or amides , which can bind to the catalyst. [ 2 ] Titanium isopropoxide (Ti(O i Pr) 4 ) is commonly used to chelate polar groups to prevent catalyst poisoning and in the case of an ester, the titanium Lewis acid binds the carbonyl oxygen. Once the oxygen is chelated with the titanium it can no longer bind to the ruthenium metal of the catalyst, which would result in catalyst deactivation. This also allows the reaction to be run at a higher effective concentration without dimerization of starting material. [ 48 ]
Another classic example is the use of a bulky Lewis acid to form the E- isomer of an ester over the preferred Z -isomer for cyclolactonization of medium rings. In one study, aluminum tris(2,6-diphenylphenoxide) (ATPH) was added to form a 7-membered lactone. The aluminum binds with the carbonyl oxygen, forcing the bulky diphenylphenoxide groups into close proximity to the ester compound. As a result, the ester adopts the E- isomer to minimize penalizing steric interactions. Without the Lewis acid , only the 14-membered dimer ring was observed. [ 49 ]
By orienting the molecule in such a way that the two reactive alkenes are in close proximity, the risk of intermolecular cross-metathesis is minimized.
Many metathesis reactions with ruthenium catalysts are hampered by unwanted isomerization of the newly formed double bond, and it is believed that ruthenium hydrides that form as a side reaction are responsible. In one study [ 50 ] it was found that isomerization is suppressed in the RCM reaction of diallyl ether with specific additives capable of removing these hydrides . Without an additive, the reaction product is 2,3-dihydrofuran (2,3-DHF), rather than the expected 2,5-dihydrofuran (2,5-DHF) formed together with ethylene gas. Radical scavengers, such as TEMPO or phenol , do not suppress isomerization ; however, additives such as 1,4-benzoquinone or acetic acid successfully prevent unwanted isomerization . Both additives are able to oxidize the ruthenium hydrides , which may explain their behavior.
Another common problem associated with RCM is the risk of catalyst degradation due to the high dilution required for some cyclizations. High dilution is also a limiting factor in industrial applications due to the large amount of waste generated from large-scale reactions at a low concentration. [ 2 ] Efforts have been made to increase reaction concentration without compromising selectivity. [ 51 ]
Ring-closing metathesis has been used historically in numerous organic syntheses and continues to be used today in the synthesis of a variety of compounds. The following examples are only representative of the broad utility of RCM, as there are numerous possibilities. For additional examples see the many review articles. [ 2 ] [ 3 ] [ 13 ] [ 42 ]
Ring-closing metathesis is important in total synthesis . One example is its use in the formation of the 12-membered ring in the synthesis of the naturally occurring cyclophane floresolide. Floresolide B was isolated from an ascidian of the genus Apidium and showed cytotoxicity against KB tumor cells. In 2005, K. C. Nicolaou and others completed a synthesis of both isomers through late-stage ring-closing metathesis using the 2nd Generation Grubbs catalyst to afford a mixture of E- and Z- isomers (1:3 E/Z ) in 89% yield. Although one prochiral center is present, the product is racemic . Floresolide is an atropisomer , as the new ring forms (due to steric constraints in the transition state) passing in front of the carbonyl group rather than behind it. The carbonyl group then locks the ring permanently in place. The E/Z isomers were then separated, and the nitrobenzoate protective group on the phenol was removed in the final step by potassium carbonate to yield the final product and the unnatural Z -isomer. [ 52 ]
In 1995, Robert Grubbs and others highlighted the stereoselectivity possible with RCM. The group synthesized a diene with an internal hydrogen bond forming a β-turn. The hydrogen bond stabilized the macrocycle precursor, placing both alkenes in close proximity, primed for metathesis. After subjecting a mixture of diastereomers to the reaction conditions, only one diastereomer of the olefin β-turn was obtained. The experiment was then repeated with ( S,S,S ) and ( R,S,R ) peptides . Only the ( S,S,S ) diastereomer was reactive, illustrating the configuration needed for ring-closing to be possible. The olefin product’s absolute configuration mimics that of Balaram’s disulfide peptide. [ 53 ]
The ring strain in 8-11 atom rings has proven to be challenging for RCM; however, there are many cases where these cyclic systems have been synthesized. [ 3 ] In 1997, Fürstner reported a facile synthesis to access jasmine ketolactone ( E/Z ) through a final RCM step. At the time, no 10-membered ring had previously been formed through RCM, and earlier syntheses were often lengthy, involving a macrolactonization to form the decanolide. By adding the diene and catalyst over a 12-hour period to refluxing toluene, Fürstner was able to avoid oligomerization and obtain both E/Z isomers in 88% yield. CH 2 Cl 2 favored the formation of the Z- isomer in a 1:2.5 ( E/Z ) ratio, whereas toluene afforded only a 1:1.4 ( E/Z ) mixture. [ 54 ]
In 2000, Alois Fürstner reported an eight-step synthesis to access (−)-balanol using RCM to form a 7-membered heterocycle intermediate. Balanol is a metabolite isolated from Verticillium balanoides and shows inhibitory action towards protein kinase C (PKC) . In the ring-closing metathesis step, a ruthenium indenylidene complex was used as the precatalyst to afford the desired 7-membered ring in 87% yield. [ 55 ]
In 2002, Stephen F. Martin and others reported the 24-step synthesis of manzamine A with two ring-closing metathesis steps to access the polycyclic alkaloid . [ 56 ] The natural product was isolated from marine sponges off the coast of Okinawa. Manzamine is a good target due to its potential as an antitumor compound. The first RCM step formed the 13-membered D ring as solely the Z -isomer in 67% yield, in contrast to the usually favored E -isomer of metathesis. After further transformations, the second RCM was used to form the 8-membered E ring in 26% yield using stoichiometric 1st Generation Grubbs catalyst. The synthesis highlights the functional group tolerance of metathesis reactions as well as the ability to access complex molecules of varying ring sizes. [ 56 ]
In 2003, Danishefsky and others reported the total synthesis of (+)-migrastatin , a macrolide isolated from Streptomyces which inhibited tumor cell migration. [ 57 ] The macrolide contains a 14-membered heterocycle that was formed through RCM. The metathesis reaction yielded the protected migrastatin in 70% yield as only the ( E,E,Z ) isomer. It is reported that this selectivity arises from the preference of the ruthenium catalyst to add to the less hindered olefin first and then cyclize onto the most accessible olefin. The final deprotection of the silyl ether yielded (+)-migrastatin . [ 57 ]
Overall, ring-closing metathesis is a highly useful reaction to readily obtain cyclic compounds of varying size and chemical makeup; however, it does have some limitations, such as the need for high dilution, imperfect selectivity, and unwanted isomerization. | https://en.wikipedia.org/wiki/Ring-closing_metathesis
RingGo is a pay-by-phone parking service based in the UK, owned by EasyPark Group .
The system is used by local authorities for on-street parking and public car parks. [ 1 ] [ 2 ] In 2023 RingGo had 10 million yearly customers, revenue of £30m and operating profit of £6m. [ 3 ]
It has been suggested that councils that use RingGo are unfairly penalising users who find the technology difficult to use or do not own a smart phone. [ 4 ] This has meant reduced customer numbers in some shopping centres that switched to smartphone-only parking. [ 5 ]
Some car parks use the app's "Start-Stop" system that requires that users log both their arrival and departure time and failure to remember to sign out can lead to an overcharge. [ 6 ] Not all car parks operate using this system.
On 10 December 2023 RingGo’s parent company, EasyPark Group, suffered a breach of customer names, addresses, email addresses, phone numbers and partial payment card information. Over two weeks later, on 26 December, EasyPark informed the UK’s ICO and other data protection authorities. [ 7 ] | https://en.wikipedia.org/wiki/RingGo
In chemistry , a ring is an ambiguous term referring either to a simple cycle of atoms and bonds in a molecule or to a connected set of atoms and bonds in which every atom and bond is a member of a cycle (also called a ring system ). A ring system that is a simple cycle is called a monocycle or simple ring , and one that is not a simple cycle is called a polycycle or polycyclic ring system . A simple ring contains the same number of sigma bonds as atoms, and a polycyclic ring system contains more sigma bonds than atoms.
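The bond-count rule above is the cyclomatic number from graph theory: for a connected molecule, the number of independent rings equals sigma bonds minus atoms plus one. The following sketch is illustrative and not from the source; hydrogens are ignored and only the sigma-bond skeleton is counted.

```python
# Hypothetical helper (not from the source): number of independent rings
# in a molecular graph, via the cyclomatic number of the sigma framework.

def ring_count(num_atoms: int, num_bonds: int, num_fragments: int = 1) -> int:
    """Independent rings = bonds - atoms + connected fragments."""
    return num_bonds - num_atoms + num_fragments

# A simple ring has as many sigma bonds as atoms -> exactly one ring:
print(ring_count(6, 6))    # cyclohexane carbon skeleton: 1
# A polycyclic ring system has more sigma bonds than atoms:
print(ring_count(10, 11))  # naphthalene carbon skeleton: 2
# An acyclic (open-chain) molecule has one fewer bond than atoms:
print(ring_count(6, 5))    # n-hexane carbon skeleton: 0
```

This makes the article's statement quantitative: each extra sigma bond beyond the atom count adds one independent cycle to the ring system.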
A molecule containing one or more rings is called a cyclic compound , and a molecule containing two or more rings (either in the same or different ring systems) is termed a polycyclic compound . A molecule containing no rings is called an acyclic or open-chain compound .
A homocycle or homocyclic ring is a ring in which all atoms are of the same chemical element . [ 1 ] A heterocycle or heterocyclic ring is a ring containing atoms of at least two different elements, i.e. a non-homocyclic ring. [ 2 ] A carbocycle or carbocyclic ring is a homocyclic ring in which all of the atoms are carbon . [ 3 ] An important class of carbocycles are alicyclic rings , [ 4 ] and an important subclass of these are cycloalkanes .
In common usage the terms "ring" and "ring system" are frequently interchanged, with the appropriate definition depending upon context. Typically a "ring" denotes a simple ring, unless otherwise qualified, as in terms like " polycyclic ring ", " fused ring ", " spiro ring " and " indole ring ", where clearly a polycyclic ring system is intended. Likewise, a "ring system" typically denotes a polycyclic ring system, except in terms like "monocyclic ring system" or " pyridine ring system ". To reduce ambiguity, IUPAC 's recommendations on organic nomenclature avoid the use of the term "ring" by using phrases such as "monocyclic parent" and "polycyclic ring system". [ 5 ] | https://en.wikipedia.org/wiki/Ring_(chemistry) |
Ring and Ball Apparatus is used to determine the softening point of bitumen , waxes, LDPE , HDPE /PP blend granules, rosin and solid hydrocarbon resins. [ 1 ] The apparatus was first designed in the 1910s, and ASTM adopted a test method in 1916. This instrument is ideally used for materials with softening points in the range of 30 °C to 157 °C. [ 2 ] [ 3 ] [ 4 ]
The solid sample is placed in a Petri dish and melted by heating on a standard hot plate . The bubble-free liquefied sample is poured from the Petri dish and cast into the ring. The brass shouldered rings in this apparatus have a depth of 6.4 mm. The cast sample in the ring is left undisturbed for one hour to solidify. Excess material is removed with a hot knife. The ring is set with the ball on top, held by ball guides, on the grooved plate within the heating bath. As the temperature rises, the balls begin to sink through the rings, each carrying a portion of the softened sample with it. The temperature at which the steel balls touch the bottom plate is recorded as the softening point in degrees Celsius. [ 5 ] [ 6 ] | https://en.wikipedia.org/wiki/Ring_and_Ball_Apparatus
In organic chemistry , a ring flip (also known as a ring inversion or ring reversal ) is the interconversion of cyclic conformers that have equivalent ring shapes (e.g., from a chair conformer to another chair conformer) that results in the exchange of nonequivalent substituent positions. [ 1 ] The overall process generally takes place over several steps, involving coupled rotations about several of the molecule's single bonds , in conjunction with minor deformations of bond angles . Most commonly, the term is used to refer to the interconversion of the two chair conformers of cyclohexane derivatives , which is specifically referred to as a chair flip , although other cycloalkanes and inorganic rings undergo similar processes.
As stated above, a chair flip is a ring inversion specifically of cyclohexane (and its derivatives) from one chair conformer to another, often to reduce steric strain . The term "flip" is misleading, because the direction of each carbon remains the same; what changes is the orientation of its substituents. A conformation is a unique structural arrangement of atoms, in particular one achieved through the rotation of single bonds. [ 2 ] A conformer (a blend of the two words) is a conformational isomer .
There exist many different conformations for cyclohexane, such as chair, boat, and twist-boat, but the chair conformation is the most commonly observed state for cyclohexanes because it requires the least amount of energy. [ 3 ] The chair conformation minimizes both angle strain and torsional strain by having all carbon-carbon bonds at 110.9° and all hydrogens staggered from one another. [ 2 ]
The molecular motions involved in a chair flip are detailed in the figure on the right: The half-chair conformation ( D , 10.8 kcal/mol, C 2 symmetry) is the energy maximum when proceeding from the chair conformer ( A , 0 kcal/mol reference, D 3d symmetry) to the higher energy twist-boat conformer ( B , 5.5 kcal/mol, D 2 symmetry). The boat conformation ( C , 6.9 kcal/mol, C 2v symmetry) is a local energy maximum for the interconversion of the two mirror-image twist-boat conformers, the second of which is converted to the other chair conformation through another half-chair. At the end of the process, all axial positions have become equatorial and vice versa. The overall barrier of 10.8 kcal/mol corresponds to a rate constant of about 10 5 s –1 at room temperature.
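The relationship between the 10.8 kcal/mol barrier and the quoted rate constant of roughly 10^5 s^-1 can be cross-checked with the Eyring equation. This is a standard textbook relation, not part of the source; a transmission coefficient of 1 is assumed.

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34    # Planck constant, J*s
R = 1.98720425e-3     # gas constant, kcal/(mol*K)

def eyring_rate(dg_kcal: float, temp_k: float = 298.15) -> float:
    """Rate constant (s^-1) for a free-energy barrier dg_kcal (kcal/mol),
    assuming a transmission coefficient of 1."""
    return (K_B * temp_k / H) * math.exp(-dg_kcal / (R * temp_k))

k = eyring_rate(10.8)
print(f"{k:.1e} s^-1")  # about 7.5e4 s^-1, consistent with the ~1e5 s^-1 quoted above
```

The same formula reproduces the slow interconversion at the NMR coalescence temperature mentioned below: at –60 °C the rate for the same barrier drops by several orders of magnitude.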
Note that the twist-boat ( D 2 ) conformer and the half-chair ( C 2 ) transition state are in chiral point groups and are therefore chiral molecules. In the figure, the two depictions of B and two depictions of D are pairs of enantiomers.
As a consequence of the chair flip, the axially-substituted and equatorially-substituted conformers of a molecule like chlorocyclohexane cannot be isolated at room temperature. However, in some cases, the isolation of individual conformers of substituted cyclohexane derivatives has been achieved at low temperatures (–150 °C). [ 4 ]
As noted above, by transitioning from one chair conformer to another, all axial positions become equatorial and all equatorial positions become axial. Substituent groups in equatorial positions roughly follow along the equator of the cyclohexane ring and are perpendicular to the axis, while substituents in axial positions roughly follow the imaginary axis of the carbon ring and are perpendicular to the equator. [ 5 ]
Diaxial (axial-axial) interactions are the steric strain between an axial substituent and another axial group, typically a hydrogen, on the same side of a chair conformation ring. The interaction is labeled by the numbers of the carbons the groups come from; a 1,3-diaxial interaction occurs between the atoms attached to the first and third carbons. The more such interactions, the more strain on the molecule, and the most strained conformations are the least likely to be observed. An example is cyclopropane , which, because of its planar geometry, has six fully eclipsed carbon-hydrogen bonds, giving a strain of 116 kJ/mol (27.7 kcal/mol) . [ 5 ] Strain is also decreased when the carbon-carbon bond angles are at or close to the preferred bond angle of 109.5°, which is why a ring of six tetrahedral carbons typically has lower strain than most other rings.
Cyclohexane is a prototype for low-energy degenerate ring flipping. Two 1 H NMR signals should be observed in principle, corresponding to axial and equatorial protons. However, due to the cyclohexane chair flip, only one signal is seen for a solution of cyclohexane at room temperature, as the axial and equatorial protons rapidly interconvert relative to the NMR time scale. The coalescence temperature at 60 MHz is ca. –60 °C. [ 6 ]
Most compounds with nonplanar rings engage in degenerate ring flipping. One well-studied example is titanocene pentasulfide , where the inversion barrier is high relative to cyclohexane's. Hexamethylcyclotrisiloxane , on the other hand, has a very low barrier.
Bicycloalkanes are alkanes containing two rings that are connected to each other by sharing two carbon atoms. The overall geometry of a bicycloalkane depends on the cis or trans orientation of the hydrogens shared between the rings rather than on the methyl groups present in the rings. [ 7 ]
Tetrodotoxin is one of the world's most potent toxins. It is made up of multiple six-membered rings set in chair conformations, with each ring but one containing an atom other than carbon. | https://en.wikipedia.org/wiki/Ring_flip
A ring forming reaction or ring-closing reaction in organic chemistry is an umbrella term for a variety of reactions that introduce one or more rings into a molecule. A heterocycle forming reaction is a reaction that introduces a new heterocycle . [ 1 ] [ 2 ] Important classes of ring forming reactions include annulations [ 3 ] and cycloadditions . Heterocyclic compounds are useful in spectroscopic identification of compounds, purity criteria, and investigating the molecular electronic structures. [ 4 ]
Named ring forming reactions include (not exhaustive): | https://en.wikipedia.org/wiki/Ring_forming_reaction |
In the geometry of circle packings in the Euclidean plane , the ring lemma gives a lower bound on the sizes of adjacent circles in a circle packing. [ 1 ]
The lemma states: Let n be any integer greater than or equal to three. Suppose that the unit circle is surrounded by a ring of n interior-disjoint circles, all tangent to it, with consecutive circles in the ring tangent to each other. Then the minimum radius of any circle in the ring is at least the unit fraction 1/(F_(2n−3) − 1), where F_i is the i-th Fibonacci number . [ 1 ] [ 2 ]
The sequence of minimum radii, from n = 3, begins 1, 1/4, 1/12, 1/33, 1/88, ...
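As a quick illustration (a sketch added here, not taken from the source), the bound values can be generated directly from the Fibonacci formula in the lemma:

```python
from fractions import Fraction

def fib(i: int) -> int:
    """Fibonacci numbers with F(1) = F(2) = 1."""
    a, b = 1, 1
    for _ in range(i - 1):
        a, b = b, a + b
    return a

def ring_lemma_bound(n: int) -> Fraction:
    """Minimum radius bound 1 / (F(2n-3) - 1) for a ring of n circles."""
    return Fraction(1, fib(2 * n - 3) - 1)

for n in range(3, 8):
    print(n, ring_lemma_bound(n))
# 3 1
# 4 1/4
# 5 1/12
# 6 1/33
# 7 1/88
```

Exact rational arithmetic via `Fraction` avoids floating-point rounding in the denominators, which grow exponentially with n.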
Generalizations to three-dimensional space are also known. [ 3 ]
An infinite sequence of circles can be constructed, containing rings for each n that exactly meet the bound of the ring lemma, showing that it is tight. The construction allows halfplanes to be considered as degenerate circles with infinite radius, and includes additional tangencies between the circles beyond those required in the statement of the lemma. It begins by sandwiching the unit circle between two parallel halfplanes; in the geometry of circles , these are considered to be tangent to each other at the point at infinity . Each successive circle after these first two is tangent to the central unit circle and to the two most recently added circles; see the illustration for the first six circles (including the two halfplanes) constructed in this way. The first n circles of this construction form a ring, whose minimum radius can be calculated by Descartes' theorem to be the same as the radius specified in the ring lemma. This construction can be perturbed to a ring of n finite circles, without additional tangencies, whose minimum radius is arbitrarily close to this bound. [ 4 ]
A version of the ring lemma with a weaker bound was first proven by Burton Rodin and Dennis Sullivan as part of their proof of William Thurston 's conjecture that circle packings can be used to approximate conformal maps . [ 5 ] Lowell Hansen gave a recurrence relation for the tightest possible lower bound, [ 6 ] and Dov Aharonov found a closed-form expression for the same bound. [ 2 ]
Beyond its original application to conformal mapping, [ 5 ] the circle packing theorem and the ring lemma play key roles in a proof by Keszegh, Pach, and Pálvölgyi that planar graphs of bounded degree can be drawn with bounded slope number . [ 7 ] | https://en.wikipedia.org/wiki/Ring_lemma |
In an electrical power distribution system, a ring main unit ( RMU ) is a factory assembled, metal enclosed set of switchgear used at the load connection points of a ring-type distribution network. It includes in one unit two switches that can connect the load to either or both main conductors, and a fusible switch or circuit breaker and switch that feed a distribution transformer . [ 1 ] The metal enclosed unit connects to the transformer either through a bus throat of standardized dimensions, or else through cables and is usually installed outdoors. Ring main cables enter and leave the cabinet. This type of switchgear is used for medium-voltage power distribution, from 7200 volts to about 36000 volts.
The ring main unit was introduced in the United Kingdom and is now widely used in other countries. In North American distribution practice, often the equivalent of a ring main unit is built into a pad-mounted transformer which integrates switches and transformer into a single cabinet.
Ring main units can be characterized by their type of insulation : air , oil or gas . The switch used to isolate the transformer can be a fusible switch, or may be a circuit breaker using vacuum or gas-insulated interrupters. The unit may also include protective relays to operate the circuit breaker on a fault. | https://en.wikipedia.org/wiki/Ring_main_unit |
The Xbox 360 video game console was subject to a number of technical problems and failures, some as a result of design flaws. Some issues could be identified by a pattern of red lights on the front face of the console; these colloquially became known as the " Red Ring of Death " or the " RRoD ". [ 1 ] [ 2 ] There were also other issues, such as discs becoming scratched in the drive and " bricking " of consoles due to dashboard updates.
There were many conflicting estimates of the console's unusually high failure rate . [ 3 ] [ 4 ] [ 5 ] The warranty provider SquareTrade estimated it at 23.7% in 2009, [ 6 ] while a Game Informer survey reported 54.2%. [ 7 ] Among the consoles owned by employees of Joystiq , which saw heavy use for games journalism purposes, the failure rate had reached 90% by the end of 2007. [ 8 ] The crisis was ultimately abated from 2009 by design revisions to the later-produced Xbox models; the S model in particular was far more resilient. By 2012 the failure rate for the Xbox 360 family was comparable to the PS3 failure rate. [ 9 ]
The issues proved extremely damaging for Microsoft . Repairs and shipping of replacement hardware cost the company $1.15bn. The issues triggered multiple lawsuits , [ 10 ] caused the Xbox to lose ground in the console wars and threatened the long-term viability of the Xbox brand. [ 11 ]
The design of the Xbox 360 was a hurried process subject to a number of late changes. These included the addition of a hard disk drive, which compromised airflow in the machine; the holes in the case were added to try to ameliorate this airflow issue. Time pressures also resulted in insufficient testing. Microsoft was aware of a myriad of technical challenges as early as August 2005, including "overheating graphics chips, cracking heat sinks, cosmetic issues with the hard disk and the front of the box, underperforming graphics memory chips from Infineon , a problem with the DVD drive - and more". Thermal issues with the GPU were ultimately what caused the infamous "Red Ring" issues, while the DVD drive issue was later responsible for scratching discs. An engineer requested a shutdown of the production line that month, but this did not occur out of fear of a delay to console delivery in some regions. [ 12 ]
The console launched in November 2005 in North America, swiftly followed by other regions. However, consoles began failing "almost immediately". Microsoft initially dismissed these concerns as "isolated reports" that were within the normal range of failure (around 2%). [ 12 ] [ 13 ] [ 14 ] [ 15 ] [ 16 ] In late 2005, Microsoft's internal data was reporting a failure rate during manufacturing of around 6-7%. These consoles were not shipped to consumers but remained in warehouses. By March 2006, around 30% of consoles manufactured were either returned or had failed checks at the factory. At one point Microsoft's yield was as low as 32% (meaning a failure rate of 68%). [ 17 ]
Peter Moore , Vice President of Microsoft's Interactive Entertainment Business division during the mid-2000s, detailed in 2015 a conversation he had with Microsoft CEO Steve Ballmer about his planned response to the incident. He stated:
"...here's what we have to do: we need to FedEx an empty box to a customer who had a problem - they would call us up - with a FedEx return label to send your box, and then we would FedEx it back to them and fix it. ... I always remember $240m of that was FedEx. ... It was sickening. I was doing a lot of interviews. ... We couldn't figure it out. ... There was a theory. We had changed our solder, which is the way you put the GPU and the fans, to lead-free. ... We think it was somehow the heat coming off the GPU was drying out some of the solder, and it wasn't the normal stuff we'd used, because we had to meet European Standards and take the lead out. ... He said, 'what's it going to cost?' I remember taking a deep breath, looking at Robbie, and saying, 'we think it's $1.15bn, Steve.' He said, 'do it.' There was no hesitation. ... If we hadn't made that decision there and then, and tried to fudge over this problem, then the Xbox brand and Xbox One wouldn't exist today." [ 11 ]
In July 2007 Moore published an open letter recognizing the console's problems, as well as announcing a three-year warranty from the original date of purchase for every Xbox 360 console that experienced the "general hardware failure" (Red Ring) issue. [ 18 ] That October a class action lawsuit was brought against Microsoft over the problems the console had with disc scratching, which could render games unplayable. [ 19 ] The case was lengthy and worked through the court system over the following decade, with litigation focusing on the validity of class certification. In 2017 the matter was decided by the United States Supreme Court in Microsoft Corp. v. Baker , which ruled in favour of Microsoft.
During the Game Developers Conference in February 2008, Microsoft announced that the failure rate had "dropped", but did not mention any specifics. [ 20 ] The same month, electronics warranty provider SquareTrade published an examination of 1040 Xbox 360s and said that they suffered from a failure rate of 16.4%. Of the 171 failures, 60% were due to a general hardware failure (and thus fell under the 3-year extended warranty). Of the remaining 40% which were not covered by the extended warranty, 18% were disc read errors, 13% were video card failures, 13% were hard drive freezes, 10% were power issues and 7% were disc tray malfunctions. [ 21 ] [ 22 ] SquareTrade also stated that its estimates were likely significantly lower than reality due to the time span of the sample (six to ten months), the eventual failure of many consoles outside this window, and the fact that most owners did not deal with SquareTrade but had their consoles repaired directly through Microsoft via the extended RROD warranty.
From 2009 the crisis began to abate due to design revisions. The Jasper models sold that year had a failure rate of under 4%, with the overall product family rate at around 12% in the first quarter. [ 23 ] The Xbox 360 S launched in 2010 and had a far lower failure rate. The S models did not include segmented outer ring lights like the launch model, and were not included in the extended warranty. [ 24 ] The 360 family as a whole was discontinued in 2016, but Microsoft continued to offer repairs for a time after that. [ 25 ]
Microsoft did not reveal the full technical details of the problem until a 2021 documentary on the history of Xbox, though earlier independent investigations had correctly identified the issues with the GPU and soldering. [ 26 ] In a nod to the incident, Microsoft sold Red Ring holiday sweaters in December 2024. The item was popular among Microsoft employees. [ 27 ]
The launch model of the Xbox 360 included four lights in a ring around the power button, on the front face of the console. Green indicated normal operation, while red lights were used for error codes. Most famously, three red lights indicated a "general hardware failure". [ 28 ] The error was coined the "Red Ring of Death" after Windows ' Blue Screen of Death error. The error was sometimes preceded by freeze-ups, graphical problems in the middle of gameplay, such as checkerboard or pinstripe patterns on the screen, and sound errors, mostly consisting of extremely loud noises that could not be affected by the volume control, with the console responding only when the power button was pressed to turn it off. [ 29 ] The problem was most prevalent in early models.
This error code was usually caused by the failure of one or more hardware components, although it could also indicate that the console was not receiving enough power from the power supply, for example because of a faulty or improperly connected power supply. The three flashing lights could also be caused by power surges. Unplugging and restarting the console fixed this issue in some cases. [ 30 ] [ 31 ]
On the Xbox 360 S and E models, the power button utilizes a different design that does not incorporate the same style of lighting that past models used. [ 32 ] A flashing red light means that the console is overheating, similar to the two-light error code on the original model Xbox 360; however, an on-screen message also appears, telling the user that the console will automatically power off to protect itself from overheating. A solid red light is similar to the one-light error if an "E XX" error message is displayed and a three-light error code if the error message is absent.
The related E74 error caused only a single red ring quadrant to illuminate, and the screen to display an error message in multiple languages: "System Error. Contact Xbox Customer Support", with the code E74 at the bottom. Much like the infamous Red Ring issue, the error was related to connection issues with the GPU, but could also be caused by a more general GPU failure or failing eDRAM. The E74 issue was covered by the three-year extended warranty from 2009, as Microsoft considered it part of the same issue as the Red Ring, and customers who previously paid Microsoft for out-of-warranty service to correct the E74 error received a refund. [ 33 ] [ 34 ] [ 35 ]
The console would illuminate all four lights if it could not detect an AV cable. This was not triggered by later revisions of the console which included an HDMI port. In some cases the four lights indicated a more serious problem with the console, followed by a 2-digit error code. [ 36 ] The four lights would also be illuminated briefly by power issues such as surges or brief outages.
Microsoft did not reveal the cause of the issues publicly until 2021, when a 6-part documentary on the history of Xbox was released. The Red Ring issue was caused by the cracking of solder joints inside the GPU flip chip package, connecting the GPU to the substrate interposer, as a result of thermal stress from heating up and cooling back down when the system is power cycled. [ 37 ] Microsoft had switched to lead-free solder due to regulations in the European Union , but using the incorrect alternative resulted in fracturing. [ 12 ]
While the cause was not confirmed by Microsoft until 2021, many independent investigations came to similar conclusions at the time, identifying thermal stress on the GPU and the solder as the culprit. The German computer magazine c't blamed the problem primarily on the use of the wrong type of lead-free solder, a type that becomes brittle when exposed to elevated temperatures for extended periods of time and can develop hair-line cracks that are almost irreparable. [ 38 ] Microsoft designed the chip in-house to cut out the traditional ASIC vendor, with the goal of saving money in ASIC design costs. After multiple product failures, Microsoft went back to an ASIC vendor and had the chip redesigned so it would dissipate more heat. [ 39 ] [ 40 ]
The Guardian also claimed that using Xbox Kinect with an old Xenon generation Xbox would cause the Red Ring, but this was denied by Microsoft. [ 41 ]
The design of the disc drive was flawed, and could cause scratches on discs, particularly if the console was moved while the disc was spinning. Unlike the Red Ring issues, the disc scratching was not resolved by hardware revisions and was present in the S and E models. Those versions shipped with a sticker informing users that moving the console while powered on posed a risk. [ 42 ] Even on static footing, however, normal floor vibrations that would occur in a household environment were enough to cause disc scratches. [ 43 ] The issue was particularly prevalent in 2006 models.
The issue was subject to multiple independent investigations, initially by the Dutch television program Kassa and later by the European Commissioner for Consumer Protection and the BBC . The BBC investigation in particular involved laboratory conditions for testing. [ 44 ] The issue ultimately led to a Supreme Court case which was ruled in favour of Microsoft in 2017. [ 45 ] [ 46 ]
Although discs scratched by the Xbox 360 were not covered under its warranty, [ 47 ] Microsoft's Xbox Disc Replacement Program [ 48 ] sold customers a new copy of discs scratched by the Xbox 360, if they were published in countries where the Xbox was originally sold, at a cost of $20. [ 49 ] The published list of games that qualify, however, was limited. [ 50 ] Third party games were only ever replaced at the discretion of the publishers. Electronic Arts for example offered replacements made within 90 days of purchase. [ 51 ]
Independent investigations concluded that the disc drives lacked a mechanism to hold the disc securely in place. [ 52 ] Tilting or moving the console while a disc was spinning inside could damage the disc and, in some cases, render it unplayable. [ 53 ] Microsoft engineers were aware of the issue ahead of launch, around September or October 2005. However, installing "bumpers" to prevent the discs from moving out of alignment would have added 50 cents to the production cost of each console, and was not implemented. An alternative would have been to slow the disc rotation speed, but this would have increased loading times, and magnetic adjustments were not possible due to the disc tray locking mechanism. [ 54 ]
Several Xbox 360 system updates caused major issues for users.
An update patch released on November 1, 2006 was reported to " brick " consoles, rendering them useless. [ 55 ] After installation of the patch, the console immediately reboots and shows an error message; usually, error code E71 is displayed during or directly after the booting animation.
In response to the November 2006 update error that "bricked" his console, a California man filed a class action lawsuit against Microsoft in Washington federal court in early December 2006. [ 56 ] The lawsuit seeks $5 million in damages and the free repair of any console rendered unusable by the update. This was the second such lawsuit filed against Microsoft, the first having been filed in December 2005, shortly after the 360's launch. Following Microsoft's extension of the Xbox 360 warranty to a full year, from the previous 90 days, the California man's attorney confirmed to the Seattle Post Intelligencer that the lawsuit had been resolved under confidential terms. [ 57 ]
On November 19, 2008, Microsoft released the " New Xbox Experience " (NXE). This update provided streaming Netflix capability and avatars; however, some users have reported the update has caused their consoles to not properly read optical media. [ 1 ] Others have reported that the update has disabled audio through HDMI connections. [ 58 ] A Microsoft spokesperson stated the company is "aware that a handful of Xbox LIVE users are experiencing audio issues, and are diligently monitoring this issue and working towards a solution." Microsoft released a patch on February 3, 2009 for the HDMI audio issues. [ 59 ]
A patch released in May 2011 prevented some users from playing games from discs. The update involved "a change in the disc reading algorithms", but would simply inform users that the disc was unreadable and ask them to clean it with a cloth. [ 60 ]
In 2007, the official steering wheel peripheral faced issues with overheating and releasing smoke, prompting the "Hotwheels" nickname. Microsoft encouraged users to only use the steering wheel in battery mode rather than while plugged in. [ 61 ] That August a product recall was issued, with Microsoft retrofitting the existing steering wheels to remedy the problem. [ 62 ]
The Nyko Intercooler was a popular aftermarket cooler, purchased by users who wished to improve air flow in an attempt to avoid the red-ring issue. While the exact cause of red-ring was not yet public in the late 2000s, it was known that temperature was an issue. [ 63 ] [ 64 ] Unfortunately, the Nyko Intercooler itself had issues and its usage could cause the red-ring or damage the DC power input. [ 64 ] The Intercooler could also melt itself onto the 360, melt the power cord, or become extremely hard to remove. [ 65 ]
Microsoft stated that the peripheral drained too much power from the console (the Intercooler power cord was installed between the Xbox 360 power supply and the console itself), could cause faults to occur, and stated that consoles fitted with the peripheral would have their warranties voided. Nyko released an updated Intercooler that used its own power source, and claimed the problem no longer occurred, but this did not affect Microsoft's stance on the warranty. | https://en.wikipedia.org/wiki/Ring_of_Death |
In mathematics, the ring of modular forms associated to a subgroup Γ of the special linear group SL(2, Z ) is the graded ring generated by the modular forms of Γ . The study of rings of modular forms describes the algebraic structure of the space of modular forms.
Let Γ be a subgroup of SL(2, Z ) that is of finite index and let M k (Γ) be the vector space of modular forms of weight k . The ring of modular forms of Γ is the graded ring M(Γ) = ⨁ k ≥ 0 M k (Γ). [ 1 ]
The ring of modular forms of the full modular group SL(2, Z ) is freely generated by the Eisenstein series E 4 and E 6 . In other words, M(Γ) is isomorphic as a C -algebra to C [ E 4 , E 6 ], the polynomial ring in two variables over the complex numbers . [ 1 ]
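A short sketch (not from the article) of how this description determines dimensions: because E 4 and E 6 are algebraically independent of weights 4 and 6, dim M k (SL(2, Z )) equals the number of pairs ( a , b ) of nonnegative integers with 4 a + 6 b = k .

```python
# Since M(SL(2, Z)) = C[E4, E6], the weight-k piece has a basis of
# monomials E4^a * E6^b with 4a + 6b = k; count them.
def dim_modular_forms(k):
    """Dimension of M_k(SL(2, Z)) read off from the free generators E4, E6."""
    return sum(1 for a in range(k // 4 + 1) if (k - 4 * a) % 6 == 0)

print([dim_modular_forms(k) for k in range(0, 16, 2)])
# the familiar sequence of dimensions for weights 0, 2, ..., 14
```

For example, weight 12 gives two monomials (E 4 ³ and E 6 ²), matching the two-dimensional space spanned by E 12 and the discriminant cusp form.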
The ring of modular forms is a graded Lie algebra since the Lie bracket [ f , g ] = k f g ′ − ℓ f ′ g of modular forms f and g of respective weights k and ℓ is a modular form of weight k + ℓ + 2 . [ 1 ] A bracket can be defined for the n -th derivative of modular forms and such a bracket is called a Rankin–Cohen bracket . [ 1 ]
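As an illustration (not from the article), the bracket can be checked on truncated q-expansions of E 4 and E 6 , taking the derivative convention f ′ = q df/dq, which is an assumption about normalization. The result [E 4 , E 6 ] has weight 4 + 6 + 2 = 12 and vanishing constant term, so it must be a scalar multiple of the discriminant cusp form Δ.

```python
# Sketch: verify that [E4, E6] = 4*E4*E6' - 6*E4'*E6 equals a multiple
# of Delta, working with truncated q-expansions and f' = q d/dq.
N = 8  # truncation order

def sigma(n, k):
    """Divisor power sum sigma_k(n)."""
    return sum(d ** k for d in range(1, n + 1) if n % d == 0)

E4 = [1] + [240 * sigma(n, 3) for n in range(1, N)]
E6 = [1] + [-504 * sigma(n, 5) for n in range(1, N)]

def deriv(f):
    """Apply q d/dq to a truncated q-expansion."""
    return [n * c for n, c in enumerate(f)]

def mult(f, g):
    """Truncated product of two q-expansions."""
    h = [0] * N
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            if i + j < N:
                h[i + j] += a * b
    return h

bracket = [4 * a - 6 * b
           for a, b in zip(mult(E4, deriv(E6)), mult(deriv(E4), E6))]

# Delta = q * prod_{n>=1} (1 - q^n)^24, truncated to order N
Delta = [1] + [0] * (N - 1)
for n in range(1, N):
    for _ in range(24):
        for i in range(N - 1 - n, -1, -1):  # multiply by (1 - q^n) in place
            Delta[i + n] -= Delta[i]
Delta = [0] + Delta[:-1]  # multiply by q

assert bracket == [-3456 * c for c in Delta]
print("[E4, E6] = -3456 * Delta, checked up to q^%d" % (N - 1))
```

The constant −3456 depends on the chosen normalizations of E 4 , E 6 , and the derivative; the structural point is only that the bracket of two modular forms lands in weight k + ℓ + 2.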
In 1973, Pierre Deligne and Michael Rapoport showed that the ring of modular forms M(Γ) is finitely generated when Γ is a congruence subgroup of SL(2, Z ) . [ 2 ]
In 2003, Lev Borisov and Paul Gunnells showed that the ring of modular forms M(Γ) is generated in weight at most 3 when Γ is the congruence subgroup Γ 1 ( N ) of prime level N in SL(2, Z ) , using the theory of toric modular forms . [ 3 ] In 2014, Nadim Rustom extended the result of Borisov and Gunnells for Γ 1 ( N ) to all levels N , and also demonstrated that the ring of modular forms for the congruence subgroup Γ 0 ( N ) is generated in weight at most 6 for some levels N . [ 4 ]
In 2015, John Voight and David Zureick-Brown generalized these results: they proved that the graded ring of modular forms of even weight for any congruence subgroup Γ of SL(2, Z ) is generated in weight at most 6 with relations generated in weight at most 12. [ 5 ] Building on this work, in 2016, Aaron Landesman, Peter Ruhm, and Robin Zhang showed that the same bounds hold for the full ring (all weights), with the improved bounds of 5 and 10 when Γ has some nonzero odd weight modular form. [ 6 ]
A Fuchsian group Γ corresponds to the orbifold obtained from the quotient Γ∖ H of the upper half-plane H . By a stacky generalization of Riemann's existence theorem , there is a correspondence between the ring of modular forms of Γ and a particular section ring closely related to the canonical ring of a stacky curve . [ 5 ]
There is a general formula for the weights of generators and relations of rings of modular forms due to the work of Voight and Zureick-Brown and the work of Landesman, Ruhm, and Zhang.
Let e i be the stabilizer orders of the stacky points of the stacky curve (equivalently, the cusps of the orbifold Γ∖ H ) associated to Γ . If Γ has no nonzero odd weight modular forms, then the ring of modular forms is generated in weight at most 6 max(1, e 1 , e 2 , ..., e r ) and has relations generated in weight at most 12 max(1, e 1 , e 2 , ..., e r ) . [ 5 ] If Γ has a nonzero odd weight modular form, then the ring of modular forms is generated in weight at most max(5, e 1 , e 2 , ..., e r ) and has relations generated in weight at most 2 max(5, e 1 , e 2 , ..., e r ) . [ 6 ]
In string theory and supersymmetric gauge theory , the algebraic structure of the ring of modular forms can be used to study the structure of the Higgs vacua of four-dimensional gauge theories with N = 1 supersymmetry . [ 7 ] The stabilizers of superpotentials in N = 4 supersymmetric Yang–Mills theory are rings of modular forms of the congruence subgroup Γ(2) of SL(2, Z ) . [ 7 ] [ 8 ] | https://en.wikipedia.org/wiki/Ring_of_modular_forms |
In algebra and in particular in algebraic combinatorics , the ring of symmetric functions is a specific limit of the rings of symmetric polynomials in n indeterminates, as n goes to infinity. This ring serves as universal structure in which relations between symmetric polynomials can be expressed in a way independent of the number n of indeterminates (but its elements are neither polynomials nor functions). Among other things, this ring plays an important role in the representation theory of the symmetric group .
The ring of symmetric functions can be given a coproduct and a bilinear form making it into a positive selfadjoint graded Hopf algebra that is both commutative and cocommutative.
The study of symmetric functions is based on that of symmetric polynomials. In a polynomial ring in some finite set of indeterminates, a polynomial is called symmetric if it stays the same whenever the indeterminates are permuted in any way. More formally, there is an action by ring automorphisms of the symmetric group S n on the polynomial ring in n indeterminates, where a permutation acts on a polynomial by simultaneously substituting each of the indeterminates for another according to the permutation used. The invariants for this action form the subring of symmetric polynomials. If the indeterminates are X 1 , ..., X n , then examples of such symmetric polynomials are
and
A somewhat more complicated example is X 1 ^3 X 2 X 3 + X 1 X 2 ^3 X 3 + X 1 X 2 X 3 ^3 + X 1 ^3 X 2 X 4 + X 1 X 2 ^3 X 4 + X 1 X 2 X 4 ^3 + ...
where the summation goes on to include all products of the third power of some variable and two other variables. There are many specific kinds of symmetric polynomials, such as elementary symmetric polynomials , power sum symmetric polynomials , monomial symmetric polynomials , complete homogeneous symmetric polynomials , and Schur polynomials .
Most relations between symmetric polynomials do not depend on the number n of indeterminates, other than that some polynomials in the relation might require n to be large enough in order to be defined. For instance, Newton's identity for the third power sum polynomial p 3 leads to
where the e i denote elementary symmetric polynomials; this formula is valid for all natural numbers n , and the only notable dependence on n is that e k ( X 1 ,..., X n ) = 0 whenever n < k . One would like to write this as an identity
that does not depend on n at all, and this can be done in the ring of symmetric functions. In that ring there are nonzero elements e k for all integers k ≥ 1, and any element of the ring can be given by a polynomial expression in the elements e k .
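This can be checked numerically. A small sketch (not from the article) evaluates Newton's identity in its concrete form p 3 = e 1 ^3 − 3 e 1 e 2 + 3 e 3 at tuples of values of varying length; the convention that e k vanishes when k exceeds the number of variables falls out automatically, since there are then no k -element subsets to sum over.

```python
import math
from itertools import combinations

def e(k, xs):
    """Elementary symmetric polynomial e_k at xs (e_k = 0 when k > len(xs))."""
    return sum(math.prod(c) for c in combinations(xs, k))

def p(k, xs):
    """Power sum symmetric polynomial p_k at xs."""
    return sum(x ** k for x in xs)

# The identity p3 = e1^3 - 3*e1*e2 + 3*e3 holds for any number of variables.
for xs in [(2,), (1, 2), (1, 2, 3), (1, 2, 3, 4, 5)]:
    assert p(3, xs) == e(1, xs) ** 3 - 3 * e(1, xs) * e(2, xs) + 3 * e(3, xs)
print("Newton's identity for p3 verified for n = 1, 2, 3, 5")
```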
A ring of symmetric functions can be defined over any commutative ring R , and will be denoted Λ R ; the basic case is for R = Z . The ring Λ R is in fact a graded R - algebra . There are two main constructions for it; the first one given below can be found in (Stanley, 1999), and the second is essentially the one given in (Macdonald, 1979).
The easiest (though somewhat heavy) construction starts with the ring of formal power series R [[ X 1 , X 2 , ...]] over R in infinitely ( countably ) many indeterminates; the elements of this power series ring are formal infinite sums of terms, each of which consists of a coefficient from R multiplied by a monomial , where each monomial is a product of finitely many finite powers of indeterminates. One defines Λ R as its subring consisting of those power series S that satisfy
Note that because of the second condition, power series are used here only to allow infinitely many terms of a fixed degree, rather than to sum terms of all possible degrees. Allowing this is necessary because an element that contains for instance a term X 1 should also contain a term X i for every i > 1 in order to be symmetric. Unlike the whole power series ring, the subring Λ R is graded by the total degree of monomials: due to condition 2, every element of Λ R is a finite sum of homogeneous elements of Λ R (which are themselves infinite sums of terms of equal degree). For every k ≥ 0, the element e k ∈ Λ R is defined as the formal sum of all products of k distinct indeterminates, which is clearly homogeneous of degree k .
Another construction of Λ R takes somewhat longer to describe, but better indicates the relationship with the rings R [ X 1 ,..., X n ] S n of symmetric polynomials in n indeterminates. For every n there is a surjective ring homomorphism ρ n from the analogous ring R [ X 1 ,..., X n +1 ] S n +1 with one more indeterminate onto R [ X 1 ,..., X n ] S n , defined by setting the last indeterminate X n +1 to 0. Although ρ n has a non-trivial kernel , the nonzero elements of that kernel have degree at least n + 1 {\displaystyle n+1} (they are multiples of X 1 X 2 ... X n +1 ). This means that the restriction of ρ n to elements of degree at most n is a bijective linear map , and ρ n ( e k ( X 1 ,..., X n +1 )) = e k ( X 1 ,..., X n ) for all k ≤ n . The inverse of this restriction can be extended uniquely to a ring homomorphism φ n from R [ X 1 ,..., X n ] S n to R [ X 1 ,..., X n +1 ] S n +1 , as follows for instance from the fundamental theorem of symmetric polynomials . Since the images φ n ( e k ( X 1 ,..., X n )) = e k ( X 1 ,..., X n +1 ) for k = 1,..., n are still algebraically independent over R , the homomorphism φ n is injective and can be viewed as a (somewhat unusual) inclusion of rings; applying φ n to a polynomial amounts to adding all monomials containing the new indeterminate obtained by symmetry from monomials already present. The ring Λ R is then the "union" ( direct limit ) of all these rings subject to these inclusions. Since all φ n are compatible with the grading by total degree of the rings involved, Λ R obtains the structure of a graded ring.
This construction differs slightly from the one in (Macdonald, 1979). That construction only uses the surjective morphisms ρ n without mentioning the injective morphisms φ n : it constructs the homogeneous components of Λ R separately, and equips their direct sum with a ring structure using the ρ n . It is also observed that the result can be described as an inverse limit in the category of graded rings. That description however somewhat obscures an important property typical for a direct limit of injective morphisms, namely that every individual element (symmetric function) is already faithfully represented in some object used in the limit construction, here a ring R [ X 1 ,..., X d ] S d . It suffices to take for d the degree of the symmetric function, since the part in degree d of that ring is mapped isomorphically to rings with more indeterminates by φ n for all n ≥ d . This implies that for studying relations between individual elements, there is no fundamental difference between symmetric polynomials and symmetric functions.
The name "symmetric function" for elements of Λ R is a misnomer : in neither construction are the elements functions , and in fact, unlike symmetric polynomials, no function of independent variables can be associated to such elements (for instance e 1 would be the sum of all infinitely many variables, which is not defined unless restrictions are imposed on the variables). However the name is traditional and well established; it can be found both in (Macdonald, 1979), which says (footnote on p. 12)
The elements of Λ (unlike those of Λ n ) are no longer polynomials: they are formal infinite sums of monomials. We have therefore reverted to the older terminology of symmetric functions.
(here Λ n denotes the ring of symmetric polynomials in n indeterminates), and also in (Stanley, 1999).
To define a symmetric function one must either indicate directly a power series as in the first construction, or give a symmetric polynomial in n indeterminates for every natural number n in a way compatible with the second construction. An expression in an unspecified number of indeterminates may do both, for instance
can be taken as the definition of an elementary symmetric function if the number of indeterminates is infinite, or as the definition of an elementary symmetric polynomial in any finite number of indeterminates. Symmetric polynomials for the same symmetric function should be compatible with the homomorphisms ρ n (decreasing the number of indeterminates is obtained by setting some of them to zero, so that the coefficients of any monomial in the remaining indeterminates are unchanged), and their degree should remain bounded. (An example of a family of symmetric polynomials that fails both conditions is X 1 X 2 ··· X n ; the family ( X 1 + 1)( X 2 + 1) ··· ( X n + 1) fails only the second condition.) Any symmetric polynomial in n indeterminates can be used to construct a compatible family of symmetric polynomials, using the homomorphisms ρ i for i < n to decrease the number of indeterminates, and φ i for i ≥ n to increase the number of indeterminates (which amounts to adding all monomials in new indeterminates obtained by symmetry from monomials already present).
The following are fundamental examples of symmetric functions.
There is no power sum symmetric function p 0 : although it is possible (and in some contexts natural) to define p 0 ( X 1 ,..., X n ) = ∑ i X i ^0 = n as a symmetric polynomial in n variables, these values are not compatible with the morphisms ρ n . The "discriminant" (∏ i < j ( X i − X j ))^2 is another example of an expression giving a symmetric polynomial for all n , but not defining any symmetric function. The expressions defining Schur polynomials as a quotient of alternating polynomials are somewhat similar to that for the discriminant, but the polynomials s λ ( X 1 ,..., X n ) turn out to be compatible for varying n , and therefore do define a symmetric function.
For any symmetric function P , the corresponding symmetric polynomials in n indeterminates for any natural number n may be designated by P ( X 1 ,..., X n ). The second definition of the ring of symmetric functions implies the following fundamental principle:
This is because one can always reduce the number of variables by substituting zero for some variables, and one can increase the number of variables by applying the homomorphisms φ n ; the definition of those homomorphisms assures that φ n ( P ( X 1 ,..., X n )) = P ( X 1 ,..., X n +1 ) (and similarly for Q ) whenever n ≥ d . See a proof of Newton's identities for an effective application of this principle.
The ring of symmetric functions is a convenient tool for writing identities between symmetric polynomials that are independent of the number of indeterminates: in Λ R there is no such number, yet by the above principle any identity in Λ R automatically gives identities the rings of symmetric polynomials over R in any number of indeterminates. Some fundamental identities are
which shows a symmetry between elementary and complete homogeneous symmetric functions; these relations are explained under complete homogeneous symmetric polynomial .
the Newton identities , which also have a variant for complete homogeneous symmetric functions:
Important properties of Λ R include the following.
Property 2 is the essence of the fundamental theorem of symmetric polynomials . It immediately implies some other properties:
This final point applies in particular to the family ( h i ) i >0 of complete homogeneous symmetric functions.
If R contains the field Q of rational numbers , it applies also to the family ( p i ) i >0 of power sum symmetric functions. This explains why the first n elements of each of these families define sets of symmetric polynomials in n variables that are free polynomial generators of that ring of symmetric polynomials.
The fact that the complete homogeneous symmetric functions form a set of free polynomial generators of Λ R already shows the existence of an automorphism ω sending the elementary symmetric functions to the complete homogeneous ones, as mentioned in property 3. The fact that ω is an involution of Λ R follows from the symmetry between elementary and complete homogeneous symmetric functions expressed by the first set of relations given above.
The ring of symmetric functions Λ Z is the Exp ring of the integers Z . It is also a lambda-ring in a natural fashion; in fact it is the universal lambda-ring in one generator.
The first definition of Λ R as a subring of R [[ X 1 , X 2 , ...]] allows the generating functions of several sequences of symmetric functions to be elegantly expressed. Contrary to the relations mentioned earlier, which are internal to Λ R , these expressions involve operations taking place in R [[ X 1 , X 2 ,...; t ]] but outside its subring Λ R [[ t ]], so they are meaningful only if symmetric functions are viewed as formal power series in indeterminates X i . We shall write "( X )" after the symmetric functions to stress this interpretation.
The generating function for the elementary symmetric functions is
Similarly one has for complete homogeneous symmetric functions
The obvious fact that E (− t ) H ( t ) = 1 = E ( t ) H (− t ) explains the symmetry between elementary and complete homogeneous symmetric functions.
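This identity can be verified on concrete values: the coefficient of t ^ k in E (− t ) H ( t ) is ∑ i (−1)^ i e i h k − i , which must vanish for every k ≥ 1. A short sketch (not from the article), evaluating the symmetric functions at a finite tuple of values:

```python
import math
from itertools import combinations, combinations_with_replacement

def e(k, xs):
    """Elementary symmetric polynomial e_k: sum over k-subsets."""
    return sum(math.prod(c) for c in combinations(xs, k))

def h(k, xs):
    """Complete homogeneous h_k: sum over k-multisets."""
    return sum(math.prod(c) for c in combinations_with_replacement(xs, k))

# Coefficient of t^k in E(-t)H(t) must vanish for all k >= 1.
xs = (1, 2, 3, 4)
for k in range(1, 6):
    coeff = sum((-1) ** i * e(i, xs) * h(k - i, xs) for i in range(k + 1))
    assert coeff == 0
print("E(-t) * H(t) = 1 verified up to degree 5")
```

Here e 0 = h 0 = 1 arises automatically, since the empty product equals 1.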
The generating function for the power sum symmetric functions can be expressed as
((Macdonald, 1979) defines P ( t ) as Σ k >0 p k ( X ) t k −1 , and its expressions therefore lack a factor t with respect to those given here). The two final expressions, involving the formal derivatives of the generating functions E ( t ) and H ( t ), imply Newton's identities and their variants for the complete homogeneous symmetric functions. These expressions are sometimes written as
which amounts to the same, but requires that R contain the rational numbers, so that the logarithm of power series with constant term 1 is defined (by log(1 − tS ) = −∑ i >0 (1/ i )( tS )^ i ).
Let Λ be the ring of symmetric functions and R a commutative algebra with unit element. An algebra homomorphism φ : Λ → R , f ↦ f ( φ ) is called a specialization . [ 1 ]
Example: | https://en.wikipedia.org/wiki/Ring_of_symmetric_functions |
In biology , a ring species is a connected series of neighbouring populations, each of which interbreeds with closely sited related populations, but for which there exist at least two "end populations" in the series, which are too distantly related to interbreed, though there is a potential gene flow between each "linked" population and the next. [ 1 ] Such non-breeding, though genetically connected, "end populations" may co-exist in the same region ( sympatry ) thus closing a "ring". The German term Rassenkreis , meaning a circle of races, is also used.
Ring species represent speciation and have been cited as evidence of evolution . They illustrate what happens over time as populations genetically diverge, specifically because they represent, in living populations, what normally happens over time between long-deceased ancestor populations and living populations, in which the intermediates have become extinct . The evolutionary biologist Richard Dawkins remarks that ring species "are only showing us in the spatial dimension something that must always happen in the time dimension". [ 2 ]
Formally, the issue is that interfertility (ability to interbreed) is not a transitive relation ; if A breeds with B, and B breeds with C, it does not mean that A breeds with C, and therefore does not define an equivalence relation . A ring species is a species with a counterexample to the transitivity of interbreeding. [ 3 ] However, it is unclear whether any of the examples of ring species cited by scientists actually permit gene flow from end to end, with many being debated and contested. [ 4 ]
The classic ring species is the Larus gull. In 1925 Jonathan Dwight found the genus to form a chain of varieties around the Arctic Circle. However, doubts have arisen as to whether this represents an actual ring species. [ 5 ] In 1938, Claud Buchanan Ticehurst argued that the greenish warbler had spread from Nepal around the Tibetan Plateau, while adapting to each new environment, meeting again in Siberia where the ends no longer interbreed. [ 6 ] These and other discoveries led Mayr to first formulate a theory on ring species in his 1942 study Systematics and the Origin of Species . Also in the 1940s, Robert C. Stebbins described the Ensatina salamanders around the Californian Central Valley as a ring species; [ 7 ] [ 8 ] but again, some authors such as Jerry Coyne consider this classification incorrect. [ 4 ] Finally in 2012, the first example of a ring species in plants was found in a spurge , forming a ring around the Caribbean Sea. [ 9 ]
The biologist Ernst Mayr championed the concept of ring species, stating that it unequivocally demonstrated the process of speciation. [ 10 ] A ring species is an alternative model to allopatric speciation , "illustrating how new species can arise through 'circular overlap', without interruption of gene flow through intervening populations…" [ 11 ] However, Jerry Coyne and H. Allen Orr point out that ring species more closely model parapatric speciation . [ 4 ]
Ring species often attract the interest of evolutionary biologists, systematists, and researchers of speciation, leading to both thought-provoking ideas and confusion concerning their definition. [ 1 ] Contemporary scholars recognize that examples in nature have proved rare due to various factors such as limitations in taxonomic delineation [ 12 ] or "taxonomic zeal" [ 10 ] —explained by the fact that taxonomists classify organisms into "species", while ring species often cannot fit this definition. [ 1 ] Other reasons such as gene flow interruption from "vicariate divergence" and fragmented populations due to climate instability have also been cited. [ 10 ]
Ring species also present an interesting case of the species problem for those seeking to divide the living world into discrete species . All that distinguishes a ring species from two separate species is the existence of the connecting populations; if enough of the connecting populations within the ring perish to sever the breeding connection then the ring species' distal populations will be recognized as two distinct species. The problem is whether to quantify the whole ring as a single species (despite the fact that not all individuals interbreed) or to classify each population as a distinct species (despite the fact that it interbreeds with its near neighbours). Ring species illustrate that species boundaries arise gradually and often exist on a continuum. [ 10 ]
Many examples have been documented in nature. Debate exists concerning much of the research, with some authors citing evidence against their existence entirely. [ 4 ] [ 13 ] [ self-published source? ] The following examples provide evidence that—despite the limited number of concrete, idealized examples in nature—continuums of species do exist and can be found in biological systems. [ 10 ] This is often characterized by sub-species level classifications such as clines, ecotypes , complexes , and varieties . Many examples have been disputed by researchers, and equally "many of the [proposed] cases have received very little attention from researchers, making it difficult to assess whether they display the characteristics of ideal ring species." [ 1 ]
The following list gives examples of ring species found in nature. Some of the examples such as the Larus gull complex, the greenish warbler of Asia, and the Ensatina salamanders of America, have been disputed. [ 13 ] [ 14 ] [ 15 ] [ 16 ] | https://en.wikipedia.org/wiki/Ring_species |
In organic chemistry , ring strain is a type of instability that exists when bonds in a molecule form angles that are abnormal. Strain is most commonly discussed for small rings such as cyclopropanes and cyclobutanes , whose internal angles are substantially smaller than the idealized value of approximately 109°. Because of their high strain, the heat of combustion for these small rings is elevated. [ 1 ] [ 2 ]
Ring strain results from a combination of angle strain , conformational strain or Pitzer strain (torsional eclipsing interactions), and transannular strain , also known as van der Waals strain or Prelog strain . The simplest examples of angle strain are small cycloalkanes such as cyclopropane and cyclobutane.
Ring strain energy can be attributed to the energy required for the distortion of bond and bond angles in order to close a ring. [ 3 ]
Ring strain energy is believed to be the cause of accelerated rates in ring-altering reactions. Its interactions with traditional bond energies change the enthalpies of compounds, affecting the kinetics and thermodynamics of ring strain reactions. [ 4 ]
Ring strain theory was first developed by German chemist Adolf von Baeyer in 1890. Previously, the only strains believed to exist were torsional and steric; Baeyer's theory, however, was based on the interactions between the two.
Baeyer's theory rested on the assumption that ringed compounds were flat. Around the same time, Hermann Sachse postulated that compound rings were not flat and could exist in a "chair" formation. Ernst Mohr later combined the two theories to explain the stability of six-membered rings and their frequency in nature, as well as the energy levels of other ring structures. [ 5 ]
In alkanes, optimum overlap of atomic orbitals is achieved at 109.5°. The most common cyclic compounds have five or six carbons in their ring. [ 6 ] Adolf von Baeyer received a Nobel Prize in 1905 for the Baeyer strain theory, his 1885 explanation of the relative stabilities of cyclic molecules. [ 6 ]
Angle strain occurs when bond angles deviate from the ideal bond angles to achieve maximum bond strength in a specific chemical conformation . Angle strain typically affects cyclic molecules, which lack the flexibility of acyclic molecules.
Angle strain destabilizes a molecule, as manifested in higher reactivity and elevated heat of combustion . Maximum bond strength results from effective overlap of atomic orbitals in a chemical bond . A quantitative measure for angle strain is strain energy . Angle strain and torsional strain combine to create ring strain that affects cyclic molecules. [ 6 ]
Normalized energies that allow comparison of ring strains are obtained by measuring the molar heat of combustion per methylene group (CH 2 ) in the cycloalkanes. [ 6 ]
The value 658.6 kJ per mole is obtained from an unstrained long-chain alkane. [ 6 ]
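The per-CH 2 reference value can be turned into a worked strain-energy estimate. The combustion enthalpies below are approximate textbook values, used here as assumptions rather than figures from this article:

```python
PER_CH2 = 658.6  # kJ/mol, unstrained long-chain reference per methylene group

# Approximate molar heats of combustion (kJ/mol) for cycloalkanes;
# these are rounded literature values, assumed for illustration.
heats_of_combustion = {3: 2091, 4: 2744, 6: 3952}  # ring size -> -dH_comb

# Strain energy = measured heat of combustion minus the unstrained reference.
strain = {n: dh - n * PER_CH2 for n, dh in heats_of_combustion.items()}
for n, s in sorted(strain.items()):
    print(f"C{n} ring: strain energy = {s:.0f} kJ/mol")
```

With these inputs the estimate lands near the accepted values: roughly 115 kJ/mol for cyclopropane, 110 kJ/mol for cyclobutane, and essentially zero for cyclohexane.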
Cycloalkanes generally have less ring strain than cycloalkenes, which is seen when comparing cyclopropane and cyclopropene. [ 8 ]
Cyclic alkenes are subject to strain resulting from distortion of the sp 2 -hybridized carbon centers. Illustrative is C 60 where the carbon centres are pyramidalized. This distortion enhances the reactivity of this molecule. Angle strain also is the basis of Bredt's rule which dictates that bridgehead carbon centers are not incorporated in alkenes because the resulting alkene would be subject to extreme angle strain.
Small trans-cycloalkenes have so much ring strain that they cannot exist for extended periods of time. [ 9 ] For instance, the smallest trans-cycloalkene that has been isolated is trans-cyclooctene . Trans-cycloheptene has been detected via spectrophotometry for minute time periods, and trans-cyclohexene is thought to be an intermediate in some reactions. No smaller trans-cycloalkenes are known. By contrast, while small cis-cycloalkenes do have ring strain, they have much less than small trans-cycloalkenes. [ 9 ]
In general, increased unsaturation leads to higher ring strain; thus cyclopropene has more ring strain than cyclopropane. [ 8 ] The differing hybridizations and geometries of the two molecules contribute to this difference, as does cyclopropene's increased angle strain. However, this trend does not hold for every alkane and alkene. [ 8 ]
In some molecules, torsional strain can contribute to ring strain in addition to angle strain. One example of such a molecule is cyclopropane . Cyclopropane's carbon-carbon bonds form angles of 60°, far from the preferred angle of 109.5° angle in alkanes, so angle strain contributes most to cyclopropane's ring strain. [ 10 ] However, as shown in the Newman projection of the molecule, the hydrogen atoms are eclipsed, causing some torsional strain as well. [ 10 ]
In cycloalkanes, each carbon is bonded by nonpolar covalent bonds to two carbons and two hydrogens. The carbons have sp 3 hybridization and should have ideal bond angles of 109.5°. Due to the limitations of the cyclic structure, however, the ideal angle is only achieved in a six-carbon ring — cyclohexane in the chair conformation . For other cycloalkanes, the bond angles deviate from the ideal.
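Baeyer's original flat-ring picture can be reproduced in a few lines: the internal angle of a planar regular n-gon is 180(n − 2)/n, and its deviation from the tetrahedral 109.5° gives a crude measure of angle strain. This is illustrative only; real rings pucker, so cyclopentane and larger deviate far less than the flat model suggests:

```python
TETRAHEDRAL = 109.5  # ideal sp3 bond angle, degrees

def flat_ring_angle(n):
    """Internal angle of a planar regular n-gon (Baeyer's flat-ring model)."""
    return 180.0 * (n - 2) / n

for n, name in [(3, "cyclopropane"), (4, "cyclobutane"),
                (5, "cyclopentane"), (6, "cyclohexane")]:
    angle = flat_ring_angle(n)
    print(f"{name:12s} C-C-C = {angle:6.1f} deg, "
          f"deviation = {angle - TETRAHEDRAL:+6.1f} deg")
```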
Molecules with a high amount of ring strain consist of three, four, and some five-membered rings, including: cyclopropanes , cyclopropenes , cyclobutanes , cyclobutenes , [1,1,1] propellanes , [2,2,2] propellanes , epoxides , aziridines , cyclopentenes , and norbornenes . These molecules have bond angles between ring atoms which are more acute than the optimal tetrahedral (109.5°) and trigonal planar (120°) bond angles required by their respective sp 3 and sp 2 bonds. Because of the smaller bond angles , the bonds have higher energy and adopt more p-character to reduce the energy of the bonds. In addition, the ring structures of cyclopropanes/enes and cyclobutanes/enes offer very little conformational flexibility. Thus, the substituents of ring atoms exist in an eclipsed conformation in cyclopropanes and between gauche and eclipsed in cyclobutanes, contributing to higher ring strain energy in the form of van der Waals repulsion.
Strain energies have been tabulated for both monocycles and bicyclics. [ 12 ]
Ring strain can be considerably higher in bicyclic systems . For example, bicyclobutane , C 4 H 6 , is noted for being one of the most strained compounds that is isolatable on a large scale; its strain energy is estimated at 63.9 kcal mol −1 (267 kJ mol −1 ). [ 13 ] [ 14 ]
Cyclopropane has less ring strain than cyclopropene because it has less unsaturation; in general, increasing the amount of unsaturation leads to greater ring strain. [ 8 ]
The potential energy and unique bonding structure contained in the bonds of molecules with ring strain can be used to drive reactions in organic synthesis . Examples of such reactions are ring opening metathesis polymerisation , photo-induced ring opening of cyclobutenes , and nucleophilic ring-opening of epoxides and aziridines .
Increased potential energy from ring strain can also be used to increase the energy released by explosives or to increase their shock sensitivity. [ 15 ] For example, the shock sensitivity of the explosive 1,3,3-trinitroazetidine may be partially or primarily explained by its ring strain. [ 15 ] | https://en.wikipedia.org/wiki/Ring_strain |
A ring system is a disc or torus orbiting an astronomical object that is composed of solid material such as dust , meteoroids , planetoids , moonlets , or stellar objects.
Ring systems are best known as planetary rings, common components of satellite systems around giant planets such as the rings of Saturn , or circumplanetary disks . But they can also be galactic rings and circumstellar discs , belts of planetoids, such as the asteroid belt or Kuiper belt , or rings of interplanetary dust , such as those around the Sun at the distances of Mercury , Venus , and Earth , in mean motion resonance with these planets. [ 1 ] [ 2 ] [ 3 ] Evidence suggests that ring systems may also be found around other types of astronomical objects, including moons and brown dwarfs .
In the Solar System , all four giant planets ( Jupiter , Saturn, Uranus , and Neptune ) have ring systems. Ring systems around minor planets have also been discovered via occultations . Some studies even theorize that the Earth may have had a ring system during the mid-late Ordovician period. [ 4 ]
There are three ways that thicker planetary rings have been proposed to have formed: from material originating from the protoplanetary disk that was within the Roche limit of the planet and thus could not coalesce to form moons, from the debris of a moon that was disrupted by a large impact, or from the debris of a moon that was disrupted by tidal stresses when it passed within the planet's Roche limit. Most rings were thought to be unstable and to dissipate over the course of tens or hundreds of millions of years, but it now appears that Saturn's rings might be quite old, dating to the early days of the Solar System. [ 5 ]
Fainter planetary rings can form as a result of meteoroid impacts with moons orbiting around the planet or, in the case of Saturn's E-ring, the ejecta of cryovolcanic material. [ 6 ] [ 7 ]
Ring systems may form around centaurs when they are tidally disrupted in a close encounter (within 0.4 to 0.8 times the Roche limit ) with a giant planet. For a differentiated body approaching a giant planet at an initial relative velocity of 3−6 km/s with an initial rotational period of 8 hours, a ring mass of 0.1%−10% of the centaur's mass is predicted. Ring formation from an undifferentiated body is less likely. The rings would be composed mostly or entirely of material from the parent body's icy mantle. After forming, the ring would spread laterally, leading to satellite formation from whatever portion of it spreads beyond the centaur's Roche Limit. Satellites could also form directly from the disrupted icy mantle. This formation mechanism predicts that roughly 10% of centaurs will have experienced potentially ring-forming encounters with giant planets. [ 8 ]
The composition of planetary ring particles varies, ranging from silicates to icy dust. Larger rocks and boulders may also be present, as seen in 2007 when tidal effects from eight moonlets only a few hundred meters across were detected within Saturn's rings. The maximum size of a ring particle is determined by the specific strength of the material it is made of, its density, and the tidal force at its altitude. The tidal force is proportional to the average density inside the radius of the ring, or to the mass of the planet divided by the radius of the ring cubed. It is also inversely proportional to the square of the orbital period of the ring.
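The two proportionalities quoted above are the same statement: for a circular orbit GM/r³ equals ω² = (2π/T)², so the tidal term scales as 1/T². A minimal sketch, using an assumed Saturn mass and a radius roughly in the A ring (neither value comes from this article):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_saturn = 5.683e26  # kg (assumed published value)
r_ring = 1.3e8       # m, roughly the A ring (assumed)

tidal_term = G * M_saturn / r_ring**3        # = omega^2 for a circular orbit
T = 2 * math.pi / math.sqrt(tidal_term)      # orbital period via Kepler's third law
tidal_from_T = (2 * math.pi / T)**2          # identical by construction

print(f"GM/r^3 = {tidal_term:.3e} s^-2, orbital period = {T/3600:.1f} h")
```

The recovered period of roughly 13 hours is consistent with orbits in Saturn's main rings, confirming that "proportional to M/r³" and "inversely proportional to T²" describe the same tidal scaling.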
Some planetary rings are influenced by shepherd moons , small moons that orbit near the inner or outer edges of a ringlet or within gaps in the rings. The gravity of shepherd moons serves to maintain a sharply defined edge to the ring; material that drifts closer to the shepherd moon's orbit is either deflected back into the body of the ring, ejected from the system, or accreted onto the moon itself.
It is also predicted that Phobos , a moon of Mars, will break up and form into a planetary ring in about 50 million years. Its low orbit, with an orbital period that is shorter than a Martian day, is decaying due to tidal deceleration . [ 9 ] [ 10 ]
Jupiter's ring system was the third to be discovered, when it was first observed by the Voyager 1 probe in 1979, [ 11 ] and was observed more thoroughly by the Galileo orbiter in the 1990s. [ 12 ] Its four main parts are a faint thick torus known as the "halo"; a thin, relatively bright main ring; and two wide, faint "gossamer rings". [ 13 ] The system consists mostly of dust. [ 11 ] [ 14 ]
Saturn's rings are the most extensive ring system of any planet in the Solar System, and thus have been known to exist for quite some time. Galileo Galilei first observed them in 1610, but they were not accurately described as a disk around Saturn until Christiaan Huygens did so in 1655. [ 15 ] The rings are not a series of tiny ringlets as many think, but are more of a disk with varying density. [ 16 ] They consist mostly of water ice and trace amounts of rock , and the particles range in size from micrometers to meters. [ 17 ]
Uranus's ring system lies between the level of complexity of Saturn's vast system and the simpler systems around Jupiter and Neptune. They were discovered in 1977 by James L. Elliot , Edward W. Dunham, and Jessica Mink . [ 18 ] In the time between then and 2005, observations by Voyager 2 [ 19 ] and the Hubble Space Telescope [ 20 ] led to a total of 13 distinct rings being identified, most of which are opaque and only a few kilometers wide. They are dark and likely consist of water ice and some radiation-processed organics . The relative lack of dust is due to aerodynamic drag from the extended exosphere–corona of Uranus.
The system around Neptune consists of five principal rings that, at their densest, are comparable to the low-density regions of Saturn's rings. However, they are faint and dusty, much more similar in structure to those of Jupiter. The very dark material that makes up the rings is likely organics processed by radiation , like in the rings of Uranus. [ 21 ] 20 to 70 percent of the rings are dust , a relatively high proportion. [ 21 ] Hints of the rings were seen for decades prior to their conclusive discovery by Voyager 2 in 1989.
A 2024 study suggests that Earth may have had a ring system for a period of 40 million years, starting from the middle of the Ordovician period (around 466 million years ago). This ring system may have originated from a large asteroid that passed by Earth at this time and had a significant amount of debris stripped by Earth's gravitational pull, forming a ring system. Evidence for this ring comes from impact craters from the Ordovician meteor event appearing to cluster in a distinctive band around the Earth's equator at that time. The presence of this ring may have led to significant shielding of Earth from the Sun's rays and a severe cooling event, thus causing the Hirnantian glaciation , the coldest known period of the last 450 million years. [ 4 ]
Reports in March 2008 suggested that Saturn's moon Rhea may have its own tenuous ring system , which would make it the only moon known to have a ring system. [ 22 ] [ 23 ] [ 24 ] A later study published in 2010 revealed that imaging of Rhea by the Cassini spacecraft was inconsistent with the predicted properties of the rings, suggesting that some other mechanism is responsible for the magnetic effects that had led to the ring hypothesis. [ 25 ]
Prior to the arrival of New Horizons , some astronomers hypothesized that Pluto and Charon might have a circumbinary ring system created from dust ejected off of Pluto's small outer moons in impacts. A dust ring would have posed a considerable risk to the New Horizons spacecraft. [ 26 ] However, this possibility was ruled out when New Horizons failed to detect any dust rings around Pluto.
10199 Chariklo , a centaur , was the first minor planet discovered to have rings. It has two rings , perhaps due to a collision that caused a chain of debris to orbit it. The rings were discovered when astronomers observed Chariklo passing in front of the star UCAC4 248-108672 on June 3, 2013 from seven locations in South America. While watching, they saw two dips in the star's apparent brightness just before and after the occultation. Because this event was observed at multiple locations, the conclusion that the dips in brightness were in fact due to rings became the leading hypothesis. The observations revealed what is likely a 19-kilometer (12-mile)-wide ring system that is about 1,000 times closer than the Moon is to Earth. In addition, astronomers suspect there could be a moon orbiting amidst the ring debris. If these rings are the leftovers of a collision as astronomers suspect, this would lend support to the idea that moons (such as the Moon) form through collisions of smaller bits of material. Chariklo's rings have not been officially named, but the discoverers have nicknamed them Oiapoque and Chuí, after two rivers near the northern and southern ends of Brazil. [ 27 ]
A second centaur, 2060 Chiron , has a constantly evolving disk of rings. [ 28 ] [ 29 ] [ 30 ] Based on stellar-occultation data that were initially interpreted as resulting from jets associated with Chiron's comet-like activity, the rings are proposed to be 324 ± 10 km in radius, though their evolution does change the radius somewhat. Their changing appearance at different viewing angles can explain the long-term variation in Chiron's brightness over time. [ 29 ] Chiron's rings are suspected to be maintained by orbiting material ejected during seasonal outbursts, as a third partial ring detected in 2018 had become a full ring by 2022, with an outburst in between in 2021. [ 31 ]
A ring around Haumea , a dwarf planet and resonant Kuiper belt member , was revealed by a stellar occultation observed on 21 January 2017. This makes it the first trans-Neptunian object found to have a ring system. [ 32 ] [ 33 ] The ring has a radius of about 2,287 km , a width of ≈ 70 km and an opacity of 0.5. [ 33 ] The ring plane coincides with Haumea's equator and the orbit of its larger, outer moon Hi’iaka [ 33 ] (which has a semimajor axis of ≈ 25,657 km ). The ring is close to the 3:1 resonance with Haumea's rotation, which is located at a radius of 2,285 ± 8 km . [ 33 ] It is well within Haumea's Roche limit , which would lie at a radius of about 4,400 km if Haumea were spherical (being nonspherical pushes the limit out farther). [ 33 ]
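The 3:1 spin-orbit resonance radius quoted above can be recovered from Kepler's third law: a ring particle at the resonance completes one orbit per three rotations of Haumea. The mass and rotation period below are published estimates assumed for this sketch, not values stated in this article:

```python
import math

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M_haumea = 4.006e21     # kg, published mass estimate (assumed)
P_spin = 3.9155 * 3600  # s, Haumea's rotation period (assumed)

# 3:1 resonance: one orbit per three spins of the primary.
T_orb = 3 * P_spin
a = (G * M_haumea * T_orb**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"3:1 resonance radius = {a / 1000:.0f} km")
```

The result, about 2,300 km, agrees to within roughly 1% with the quoted 2,285 ± 8 km; the residual reflects the uncertainty in the assumed mass.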
In 2023, astronomers announced the discovery of a widely separated ring around the dwarf planet and Kuiper belt object Quaoar . [ 34 ] [ 35 ] Further analysis of the occultation data uncovered a second inner, fainter ring. [ 36 ]
Both rings display unusual properties. The outer ring orbits at a distance of 4,057 ± 6 km , approximately 7.5 times the radius of Quaoar and more than double the distance of its Roche limit. The inner ring orbits at a distance of 2,520 ± 20 km , approximately 4.6 times the radius of Quaoar and also beyond its Roche limit. [ 36 ] The outer ring appears to be inhomogeneous, containing a thin, dense section as well as a broader, more diffuse section. [ 35 ]
Because all giant planets of the Solar System have rings, the existence of exoplanets with rings is plausible. Although particles of ice , the material that is predominant in the rings of Saturn , can only exist around planets beyond the frost line , within this line rings consisting of rocky material can be stable in the long term. [ 37 ] Such ring systems can be detected for planets observed by the transit method by additional reduction of the light of the central star if their opacity is sufficient. As of 2024, two candidate extrasolar ring systems have been found by this method, around HIP 41378 f [ 38 ] and K2-33b . [ 39 ]
Fomalhaut b was found to be large and unclearly defined when detected in 2008. This was hypothesized to either be due to a cloud of dust attracted from the dust disc of the star, or a possible ring system, [ 40 ] though in 2020 Fomalhaut b itself was determined to very likely be an expanding debris cloud from a collision of asteroids rather than a planet. [ 41 ] Similarly, Proxima Centauri c has been observed to be far brighter than expected for its low mass of 7 Earth masses, which may be attributed to a ring system of about 5 R J . [ 42 ]
A 56-day-long sequence of dimming events in the star V1400 Centauri observed in 2007 was interpreted as a substellar object with a circumstellar disk or massive rings transiting the star. [ 43 ] This substellar object, dubbed " J1407b ", is most likely a free-floating brown dwarf or rogue planet several times the mass of Jupiter. [ 44 ] The circumstellar disk or ring system of J1407b is about 0.6 astronomical units (90,000,000 km; 56,000,000 mi) in radius. [ 43 ] J1407b's transit of V1400 Centauri revealed gaps and density variations within its disk or ring system, which has been interpreted as hints of exomoons or exoplanets forming around J1407b. [ 43 ]
| https://en.wikipedia.org/wiki/Ring_system |
Ring vaccination is a strategy to inhibit the spread of a disease by vaccinating those who are most likely to be infected. [ 1 ]
This strategy vaccinates the contacts of confirmed patients, and people who are in close contact with those contacts. This way, everyone who has been, or could have been, exposed to a patient receives the vaccine, creating a 'ring' of protection that can limit the spread of a pathogen.
Ring vaccination requires thorough and rapid surveillance and epidemiologic case investigation. The Intensified Smallpox Eradication Program used this strategy with great success in its efforts to eradicate smallpox in the latter half of the 20th century. [ 2 ]
When someone falls ill, people they might have infected should be vaccinated . Contacts who might have been infected typically include family, neighbours, and friends. Several layers of contacts may be vaccinated (the contacts, the contacts' contacts, the contacts' contacts' contacts, etc.). [ 3 ]
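The layered-contacts idea maps directly onto a breadth-first search of the contact graph. This is a sketch with a hypothetical graph and names, not code from any vaccination programme; it collects everyone within a chosen number of layers of a confirmed case:

```python
from collections import deque

def ring_to_vaccinate(contacts, cases, layers=2):
    """Everyone within `layers` hops of a confirmed case (cases excluded)."""
    seen = set(cases)
    frontier = deque((person, 0) for person in cases)
    ring = set()
    while frontier:
        person, depth = frontier.popleft()
        if depth == layers:
            continue  # do not expand beyond the outermost layer
        for nxt in contacts.get(person, ()):
            if nxt not in seen:
                seen.add(nxt)
                ring.add(nxt)
                frontier.append((nxt, depth + 1))
    return ring

# Hypothetical graph: A is ill; B and C are direct contacts; D is a
# contact of a contact; E is three hops away and falls outside the ring.
contacts = {"A": ["B", "C"], "B": ["D"], "D": ["E"]}
print(sorted(ring_to_vaccinate(contacts, {"A"})))  # -> ['B', 'C', 'D']
```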
Ring vaccination relies on contact tracing to determine possible infections. However, this can be difficult. In some cases, it is preferable to vaccinate as many people as possible within the geographic area of known infection (geographically-targeted reactive vaccination). If the infections occur within a defined geographic boundary, it may be preferable to vaccinate the entire community in which the illness has appeared, rather than explicitly tracing contacts. [ 4 ]
Many vaccines take several weeks to induce immunity, and thus do not provide immediate protection. [ 5 ] However, even if some of the ill person's contacts are already infected, ring vaccination can prevent the virus from being transmitted again, to the ill contacts' contacts. [ medical citation needed ] A few vaccines can protect even if they are given just after infection; ring vaccination is somewhat more effective for vaccines providing this post-exposure prophylaxis . [ 4 ]
When responding to a possible outbreak , health officials should consider which is best, ring vaccination or mass vaccination . In some outbreaks, it might be better to only vaccinate those directly exposed; variable factors (such as demographics and the vaccine that is available) can make one method or the other safer, with fewer people experiencing side-effects when the same number are protected from the disease. [ 6 ]
Ring vaccination was used in Leicester , England in the late 19th-century. [ 7 ] It was also used in the mid-20th century in the eradication of smallpox . [ 8 ] [ 9 ]
It was used experimentally in the Ebola virus epidemic in West Africa . [ 10 ] [ 11 ]
In 2018, health authorities used a ring vaccination strategy to try to suppress the 2018 Équateur province Ebola outbreak . This involved vaccinating only those most likely to be infected; direct contacts of infected individuals, and contacts of those contacts. The vaccine used was rVSV-ZEBOV . [ 12 ]
Ring vaccination has been used extensively in the 2018 Kivu Ebola outbreak , with over 90,000 people vaccinated. In April 2019, the WHO published the preliminary results of its research, conducted in association with the DRC's Institut National pour la Recherche Biomedicale , into the effectiveness of the ring vaccination program, stating that the rVSV-ZEBOV-GP vaccine had been 97.5% effective at stopping Ebola transmission, relative to no vaccination. [ 13 ] [ 14 ] | https://en.wikipedia.org/wiki/Ring_vaccination |
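A figure like the 97.5% above is a relative risk reduction: one minus the ratio of attack rates in vaccinated versus unvaccinated contacts. A minimal sketch of the arithmetic, with made-up attack rates chosen only to illustrate the formula (not the study's data):

```python
def vaccine_effectiveness(attack_rate_vaccinated, attack_rate_unvaccinated):
    """Effectiveness = 1 - relative risk, expressed as a percentage."""
    return 100 * (1 - attack_rate_vaccinated / attack_rate_unvaccinated)

# Hypothetical attack rates: 0.05% among vaccinated contacts versus
# 2% among comparable unvaccinated contacts.
print(vaccine_effectiveness(0.0005, 0.02))  # ~97.5
```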
In telecommunications , the term ringaround has the following meanings:
| https://en.wikipedia.org/wiki/Ringaround |
In mathematics, a Ringel–Hall algebra is a generalization of the Hall algebra , studied by Claus Michael Ringel ( 1990 ). It has a basis of equivalence classes of objects of an abelian category , and the structure constants for this basis are related to the numbers of extensions of objects in the category. | https://en.wikipedia.org/wiki/Ringel–Hall_algebra |
Ringing out is an audio engineering technique used to prevent audio feedback between on-stage microphones and loudspeakers, and to maximize gain before feedback . Depending on the acoustics of a venue, certain frequencies may be resonant and thus more prone to feedback.
To ring out a room, a sound technician raises the gain or fader controls on a mixing desk until the audio system begins to feed back. Once feedback occurs, the technician uses an equalizer, usually a graphic equalizer , to reduce the gain at the frequency of the feedback; that frequency can be identified using a spectrum analyzer . This is repeated until feedback is sufficiently reduced without compromising the quality of the sound.
Ringing out is particularly important for the stage monitor system . While the performer or musician is usually behind the main PA system, the monitors face the performers so they can hear themselves. As such, a microphone is much more likely to feed back through the monitor loudspeakers than through the main PA.
Ringing out can become quite complex when working with a large number of microphones and monitors. Indeed, with larger touring acts, one of the major advantages of using in-ear monitors is the minimal ringing out that needs to be done.
Hardware exists that can perform many of the same functions that ringing out provides, such as feedback suppression and room optimization .
| https://en.wikipedia.org/wiki/Ringing_out |
The rings of Chariklo are a set of two narrow rings around the minor planet 10199 Chariklo . Chariklo, with a diameter of about 250 kilometres (160 mi), is the second-smallest celestial object with confirmed rings (with 2060 Chiron being the smallest [ 1 ] ) and the fifth ringed celestial object discovered in the Solar System , after the gas giants and ice giants . [ 2 ] Orbiting Chariklo is a bright ring system consisting of two narrow and dense bands, 6–7 km (4 mi) and 2–4 km (2 mi) wide, separated by a gap of 9 kilometres (6 mi). [ 2 ] [ 3 ] The rings orbit at distances of about 400 kilometres (250 mi) from the centre of Chariklo, a thousandth the distance between Earth and the Moon . The discovery was made by a team of astronomers using ten telescopes at various locations in Argentina, Brazil, Chile and Uruguay in South America during observation of a stellar occultation on 3 June 2013, and was announced on 26 March 2014. [ 2 ]
The existence of a ring system around a minor planet was unexpected because it had been thought that rings could only be stable around much more massive bodies. Ring systems around minor bodies had not previously been discovered despite the search for them through direct imaging and stellar occultation techniques. [ 2 ] Chariklo's rings should disperse over a period of at most a few million years, so either they are very young, or they are actively contained by shepherd moons with a mass comparable to that of the rings. [ 2 ] [ 4 ] [ 5 ] The team nicknamed the rings Oiapoque (the inner, more substantial ring) and Chuí (the outer ring), after the two rivers that form the northern and southern coastal borders of Brazil. A request for formal names will be submitted to the IAU at a later date. [ 4 ]
Chariklo is the largest confirmed member of a class of small bodies known as centaurs, which orbit the Sun between Saturn and Uranus in the outer Solar System . Forecasts had shown that, as seen from South America, it would pass in front of the 12.4-magnitude star UCAC4 248-108672 , located in the constellation Scorpius , on 3 June 2013. [ 6 ]
With the aid of thirteen telescopes located in Argentina, Brazil, Chile, and Uruguay, [ 7 ] a team of astronomers led by Felipe Braga Ribas, a post-doctoral astronomer of the National Observatory (ON), in Rio de Janeiro, [ 7 ] and 65 other researchers from 34 institutions in 12 countries, [ 2 ] was able to observe this occultation event, a phenomenon during which a star disappears behind its occulting body. [ 2 ] The 1.54-metre Danish National Telescope at La Silla Observatory , due to the much faster data acquisition rate of its ' Lucky Imager ' camera (10 Hz), was the only telescope able to resolve the individual rings. [ 2 ]
During this event, the observed brightness was predicted to dip from magnitude 14.7 (star + Chariklo) to 18.5 (Chariklo alone) for at most 19.2 seconds. [ 8 ] This increase of 3.8 magnitudes is equivalent to a decrease in brightness by a factor 32.5. The primary occultation event was accompanied by four additional small decreases in the overall intensity of the light curve , which were observed seven seconds before the beginning of the occultation and seven seconds after the end of the occultation. [ 2 ] These secondary occultations indicated that something was partially blocking the light of the background star. The symmetry of the secondary occultations and multiple observations of the event in various locations helped reconstruct not only the shape and size of the object, but also the thickness, orientation, and location of the ring planes. [ 9 ] The relatively consistent ring properties inferred from several secondary occultation observations discredit alternative explanations for these features, such as cometary-like outgassing. [ 2 ]
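The magnitude arithmetic above follows Pogson's relation, flux ratio = 10^(0.4 Δm). A quick check with the rounded magnitudes quoted here gives a factor of about 33; the quoted 32.5 presumably reflects unrounded magnitude values:

```python
def flux_ratio(m_faint, m_bright):
    """Pogson's relation: brightness ratio implied by a magnitude difference."""
    return 10 ** (0.4 * (m_faint - m_bright))

# Star + Chariklo (mag 14.7) dropping to Chariklo alone (mag 18.5):
print(f"brightness ratio = {flux_ratio(18.5, 14.7):.1f}")  # ~33
```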
Telescopes that observed the occultation included the Danish National Telescope and the survey telescope TRAPPIST of La Silla Observatory , the PROMPT Telescopes ( Cerro Tololo Inter-American Observatory ), the Brazilian Southern Astrophysical Research Telescope or SOAR ( Cerro Pachón ), the 0.45-metre ASH telescope ( Cerro Burek ), and those of the State University of Ponta Grossa Observatory, the Polo Astronomical Pole Casimiro Montenegro Filho (at the Itaipu Technological Park Foundation , in Foz do Iguaçu ), the Universidad Católica Observatory of the Pontifical Catholic University of Chile ( Santa Martina, Chile ) and several at Estación Astrofísica de Bosque Alegre , operated by the National University of Córdoba . Negative detections were recorded by El Catalejo Observatory ( Santa Rosa, La Pampa , Argentina), the 20-inch Planewave telescope (part of the Searchlight Observatory Network ) at San Pedro de Atacama , Chile and the OALM instrument at Los Molinos Astronomical Observatory in Uruguay. Some of the other participating instruments were those at the National Observatory in Rio de Janeiro, the Valongo Observatory (at the Federal University of Rio de Janeiro ), the Oeste do Paraná State University Observatory or Unioeste (in the state of Paraná ), the Pico dos Dias Observatory or OPL (in Minas Gerais ) and the São Paulo State University (UNESP – Guaratinguetá) in São Paulo. [ 2 ] [ 7 ] [ 10 ]
On 18 October 2022, the NIRCam instrument onboard the James Webb Space Telescope (JWST) was used to observe the occultation of the star Gaia DR3 6873519665992128512 by Chariklo's rings, capturing the characteristic dual decrease in the star's brightness as the rings obscured the starlight at two points. [ 11 ]
The orientation of the rings is consistent with an edge-on view from Earth in 2008, explaining the observed dimming of Chariklo between 1997 and 2008 by a factor of 1.75, as well as the gradual disappearance of water ice and other materials from its spectrum as the observed surface area of the rings decreased. [ 12 ] Also consistent with this edge-on orientation is that since 2008, the Chariklo system has increased in brightness by a factor of 1.5 again, and the infrared water-ice spectral features have reappeared. This suggests that the rings are composed at least partially of water ice. An icy ring composition is also consistent with the expected density of a disrupted body within Chariklo's Roche limit . [ 2 ]
The equivalent depth (a parameter related to the total amount of material contained in the ring based on the viewing geometry) of C1R was observed to vary by 21% over the course of the observation. Similar asymmetries have been observed during occultation observations of Uranus's narrow rings, and may be due to resonant oscillations responsible for modulating the width and optical depth of the rings. The column density of C1R is estimated to be 30–100 g/cm 2 . [ 2 ]
C2R is half the width of the brighter ring, and resides just outside it, at 404.8 kilometres (251.5 mi). With an optical depth of about 0.06, it is markedly more diffuse than its companion. [ 13 ] Altogether, it has approximately a twelfth of the mass of C1R. [ 2 ]
The origin of the rings is unknown, but both are likely to be remnants of a debris disk, which could have formed via an impact on Chariklo, a collision with or between one or more pre-existing moons, tidal disruption of a former retrograde moon, or from material released from the surface by cometary activity or rotational disruption. [ 2 ] If the rings formed through an impact event with Chariklo, the object must have impacted at a low velocity to prevent ring particles from being ejected beyond Chariklo's Hill sphere .
Impact velocities in the outer Solar System are typically ≈ 1 km/s (compared with the escape velocity at the surface of Chariklo of ≈ 0.1 km/s), and were even lower before the Kuiper belt was dynamically excited, supporting the possibility that the rings formed in the Kuiper belt before Chariklo was transferred to its current orbit less than 10 Myr ago. [ 2 ] Impact velocities in the asteroid belt are much higher (≈ 5 km/s), which could explain the absence of such ring features in minor bodies within the asteroid belt. [ 2 ] Collisions between ring particles would cause the ring to widen substantially, and Poynting–Robertson drag would cause the ring particles to fall onto the central body within a few million years, requiring either an active source of ring particles or dynamical confinement by small (kilometre-sized) embedded or shepherd moons yet to be discovered. [ 2 ] Such moons would be very challenging to detect via direct imaging from Earth due to the small radial separation of the ring system and Chariklo. [ 2 ]
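The quoted surface escape velocity can be checked with a back-of-the-envelope calculation. The radius and density below are illustrative assumptions (Chariklo's actual size and mass remain uncertain), not values from the text:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

# Assumed values (illustrative; Chariklo's true figures are uncertain)
radius = 125e3    # m, ~125 km mean radius
density = 1000.0  # kg/m^3, roughly the density of water ice

# Mass of a homogeneous sphere with the assumed radius and density
mass = density * (4.0 / 3.0) * math.pi * radius**3

# Escape velocity at the surface: v = sqrt(2GM/R)
v_esc = math.sqrt(2 * G * mass / radius)

print(f"{v_esc:.0f} m/s")  # on the order of 0.1 km/s, as quoted in the text
```

With these assumptions the result is roughly 90 m/s, consistent with the ≈ 0.1 km/s figure, so typical ≈ 1 km/s impactors arrive well above escape speed.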
As the smallest known celestial body with its own ring system, Chariklo and its rings are the first to have been fully simulated by numerically solving the N-body problem . [ 14 ] The simulations assumed that the planetoid and the ring particles are spherical, and that all particles in a given run have equal radii of between 2.5 and 10 m. Depending on the parameters, the simulations involved between 21 million and 345 million particles interacting with each other through gravity and collisions . The goal of the simulations was to assess under what conditions the rings remain stable; that is, do not cluster into a few larger bodies.
The first conclusion from the simulations is that Chariklo's density must be greater than that of the ring material simply to keep the rings in orbit. Second, for all tested ring particle radii and ring spatial densities, the rings clustered on relatively short timescales. The authors suggested three main explanations for this discrepancy.
They additionally noted that the effects of some of the assumptions, for instance complete absence of eccentricity of the rings, have not been evaluated. [ 14 ]
Solar System → Local Interstellar Cloud → Local Bubble → Gould Belt → Orion Arm → Milky Way → Milky Way subgroup → Local Group → Local Sheet → Virgo Supercluster → Laniakea Supercluster → Local Hole → Observable universe → Universe Each arrow ( → ) may be read as "within" or "part of". | https://en.wikipedia.org/wiki/Rings_of_Chariklo |
The rings of Earth are a proposed set of planetary rings that may once have existed around Earth during the Ordovician period. These rings may have formed during the Ordovician impact spike approximately 466 million years ago. [ 1 ] [ 2 ] [ 3 ] They were first formally proposed by a team of scientists working with Monash University in September 2024, and had been a subject of interest for several years prior to the study.
The Ordovician was the geologic period during which the rings are believed to have formed. It spanned from 486.85 million years ago to 443.1 million years ago. During this period, an event known as the Ordovician meteor event occurred, in which an unusually high flux of L chondrite meteorites struck Earth. The meteorites may have come from the breakup of a large parent body roughly 150 km (93 mi) in diameter. [ 4 ]
The parent body that produced the L chondrite meteorites is believed to have passed within Earth's Roche limit , causing it to be torn apart and its debris to be scattered into orbit, eventually forming a debris ring. [ 5 ] [ 6 ]
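The rigid-body Roche limit invoked here can be sketched with the standard formula d = R·(2ρ_planet/ρ_body)^(1/3). The densities below are illustrative assumptions (Earth's mean density and a typical L-chondrite-like rock), not figures from the study:

```python
# Rigid-body Roche limit: d = R * (2 * rho_planet / rho_body) ** (1/3)
earth_radius = 6371e3    # m, mean radius of Earth
earth_density = 5514.0   # kg/m^3, mean density of Earth
body_density = 3300.0    # kg/m^3, assumed for an L-chondrite-like body

roche_limit = earth_radius * (2 * earth_density / body_density) ** (1 / 3)

print(f"{roche_limit / 1e3:.0f} km")  # roughly 9,500 km from Earth's centre
```

A body on a trajectory passing inside this distance (measured from Earth's centre) would be tidally disrupted rather than surviving intact.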
The rings are believed to have been present approximately 466 million years ago. [ 1 ] [ 7 ] [ 8 ] The Hirnantian glaciation may have been a direct result of the rings shading the Earth from sunlight, [ 9 ] and the rings may have persisted for up to 40 million years. [ 9 ]
The ring system was first formally proposed after 21 impact craters from the meteor event were found to be located along a narrow band around the Earth's equator . [ 10 ] [ 11 ] Andrew G. Tomkins, [ 9 ] Erin L. Martin [ 9 ] and Peter A. Cawood, [ 9 ] working with Monash University , released a study in September 2024 presenting evidence for the existence of the rings.
The study noted that all 21 craters produced by the meteor event fall within an equatorial band of ≤30°, even though ~70% of the Earth's crust is suitable for the preservation of craters. The study estimated the chance of all 21 craters falling within that band at one in 25 million, making the pattern highly unlikely unless the impactors had been delivered by a decaying ring system. [ 9 ]
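The order of magnitude of that probability can be illustrated with a simple binomial-style estimate. The band fraction below is just the raw area fraction of a sphere lying within 30° of the equator; the study's much smaller figure comes from a more careful weighting by the distribution of crater-preserving crust:

```python
import math

n_craters = 21

# Fraction of a sphere's surface area within 30 degrees of the equator:
# area fraction between latitudes -phi and +phi is sin(phi)
band_fraction = math.sin(math.radians(30))  # = 0.5

# Probability that all 21 independently placed random craters land in the band
p_all_in_band = band_fraction ** n_craters

print(f"1 in {1 / p_all_in_band:,.0f}")  # ~1 in 2 million with this crude model
```

Even this crude model makes the clustering wildly improbable by chance; accounting for where craters can actually be preserved pushes the odds to the study's one in 25 million.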
Rhea , the second-largest moon of Saturn , may have a tenuous ring system consisting of three narrow, relatively dense bands within a particulate disk. This would be the first discovery of rings around a moon . The potential discovery was announced in the journal Science on March 6, 2008. [ 2 ]
In November 2005 the Cassini orbiter found that Saturn's magnetosphere is depleted of energetic electrons near Rhea. [ 3 ] According to the discovery team, the pattern of depletion is best explained by assuming the electrons are absorbed by solid material in the form of an equatorial disk of particles perhaps several decimeters to approximately a meter in diameter and that contains several denser rings or arcs. Subsequent targeted optical searches of the putative ring plane from several angles by Cassini' s narrow-angle camera failed to find any evidence of the expected ring material, and in August 2010 it was announced that Rhea was unlikely to have rings, [ 4 ] and that the reason for the depletion pattern, which is unique to Rhea, is unknown. [ 5 ] [ 6 ] However, an equatorial chain of bluish marks on the Rhean surface suggests past impacts of deorbiting ring material and leaves the question unresolved. [ 7 ]
Voyager 1 observed a broad depletion of energetic electrons trapped in Saturn's magnetic field downstream from Rhea in 1980. These measurements, which were never explained, were made at a greater distance than the Cassini data.
On November 26, 2005, Cassini made its only targeted flyby of Rhea during its primary mission. It passed within 500 km of Rhea's surface, downstream of Saturn's magnetic field, and observed the resulting plasma wake as it had with other moons, such as Dione and Tethys . In those cases, there was an abrupt cutoff of energetic electrons as Cassini crossed into the moons' plasma shadows (the regions where the moons themselves blocked the magnetospheric plasma from reaching Cassini). [ 2 ] [ 8 ] However, in the case of Rhea, the electron plasma started to drop off slightly at eight times that distance, and decreased gradually until the expected sharp drop off as Cassini entered Rhea's plasma shadow. The extended distance corresponds to Rhea's Hill sphere , the distance of 7.7 times Rhea's radius inside of which orbits are dominated by Rhea's rather than Saturn's gravity. When Cassini emerged from Rhea's plasma shadow, the reverse pattern occurred: a sharp surge in energetic electrons, then a gradual increase out to Rhea's Hill-sphere radius.
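The Hill-sphere radius quoted above can be reproduced from the standard formula r_H = a·(m/3M)^(1/3). The orbital and mass figures below are commonly published values for the Rhea–Saturn system, used here as assumptions rather than taken from this text:

```python
a_rhea = 527_108e3   # m, Rhea's semi-major axis around Saturn
m_rhea = 2.307e21    # kg, mass of Rhea
m_saturn = 5.683e26  # kg, mass of Saturn
r_rhea = 763.5e3     # m, mean radius of Rhea

# Hill-sphere radius: region where Rhea's gravity dominates Saturn's
r_hill = a_rhea * (m_rhea / (3 * m_saturn)) ** (1 / 3)

print(f"{r_hill / r_rhea:.1f} Rhea radii")  # close to the 7.7 quoted in the text
```

With these inputs the result is about 7.6 Rhea radii, in good agreement with the figure given above.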
These readings are similar to those of Enceladus , where water venting from its south pole absorbs the electron plasma. However, in the case of Rhea, the absorption pattern is symmetrical. In addition, the Magnetospheric Imaging Instrument (MIMI) observed that this gentle gradient was punctuated by three sharp drops in plasma flow on each side of the moon, a pattern that was also nearly symmetrical. [ 2 ] [ 8 ]
In August 2007, Cassini passed through Rhea's plasma shadow again, but further downstream. Its readings were similar to those of Voyager 1. Two years later, in October 2009, it was announced that a set of small ultraviolet-bright spots distributed in a line that extends three quarters of the way around Rhea's circumference, within 2 degrees of the equator, may represent further evidence for a ring. The spots presumably represent the impact points of deorbiting ring material. [ 9 ]
There are no images or direct observations of the material thought to be absorbing the plasma, but the likely candidates would be difficult to detect directly. Further observations during Cassini' s targeted flyby on March 2, 2010 [ 8 ] found no evidence of orbiting ring material. [ 4 ]
Cassini' s flyby trajectory makes interpretation of the magnetic readings difficult.
The obvious candidates for magnetospheric plasma-absorbing matter are neutral gas and dust, but the quantities required to explain the observed depletion are far greater than Cassini' s measurements allow. Therefore the discoverers, led by Geraint Jones of the Cassini MIMI team, argue that the depletions must be caused by solid particles orbiting Rhea:
"An analysis of the electron data indicates that this obstacle is most likely in the form of a low optical depth disk of material near Rhea’s equatorial plane and that the disk contains solid bodies up to ~1 m in size." [ 2 ]
The simplest explanation for the symmetrical punctuations in plasma flow is "extended arcs or rings of material" orbiting Rhea in its equatorial plane. These symmetric dips bear some similarity to the method by which the rings of Uranus were discovered in 1977. [ 10 ] The slight deviations from absolute symmetry may be due to "a modest tilt to the local magnetic field" or "common plasma flow deviations" rather than to asymmetry of the rings themselves, which may be circular.
Not all scientists are convinced that the observed signatures are caused by a ring system. No rings have been seen in images, which puts a very low limit on dust-sized particles. Furthermore, a ring of boulders would be expected to generate dust that would likely have been seen in the images. [ 11 ]
Simulations suggest that solid bodies can stably orbit Rhea near its equatorial plane over astronomical timescales. They may not be stable around Dione and Tethys because those moons are far nearer Saturn, and therefore have far smaller Hill spheres , or around Titan because of drag from its dense atmosphere. [ 2 ]
Several suggestions have been made for the possible origin of rings. An impact could have ejected material into orbit; this could have happened as recently as 70 million years ago. A small body could have been disrupted when caught in orbit about Rhea. In either case, the debris would eventually have settled into circular equatorial orbits. Given the possibility of long-term orbital stability, however, it is possible that they survive from the formation of Rhea. [ 2 ]
For discrete rings to persist, something must confine them. Suggestions include moonlets or clumps of material within the disk, similar to those observed within Saturn's A ring . [ 2 ]