A radar speed gun, also known as a radar gun, speed gun, or speed trap gun, is a device used to measure the speed of moving objects. It is commonly used by police to check the speed of moving vehicles while conducting traffic enforcement, and in professional sports to measure speeds such as those of baseball pitches, [1] tennis serves, and cricket bowls. [2] A radar speed gun is a Doppler radar unit that may be handheld, vehicle-mounted, or static. It measures the speed of the objects at which it is pointed by detecting a change in frequency of the returned radar signal caused by the Doppler effect, whereby the frequency of the returned signal is raised in proportion to the object's speed if the object is approaching, and lowered if the object is receding. [3] Such devices are frequently used for speed limit enforcement, although more modern LIDAR speed gun instruments, which use pulsed laser light instead of radar, began to replace radar guns during the first decade of the twenty-first century because of limitations associated with small radar systems.

The radar speed gun was invented by John L. Barker Sr. and Ben Midlock, who developed radar for the military while working for the Automatic Signal Company (later Automatic Signal Division of LFE Corporation) in Norwalk, Connecticut, during World War II. Originally, Automatic Signal was approached by Grumman to solve the specific problem of terrestrial landing gear damage on the Consolidated PBY Catalina amphibious aircraft. Barker and Midlock cobbled together a Doppler radar unit from coffee cans soldered shut to make microwave resonators. The unit was installed at the end of the runway at Grumman's Bethpage, New York, facility and aimed directly upward to measure the sink rate of landing PBYs. After the war, Barker and Midlock tested radar on the Merritt Parkway. [4] In 1947, the system was tested by the Connecticut State Police in Glastonbury, Connecticut, initially for traffic surveys and issuing warnings to drivers for excessive speed. Starting in February 1949, the state police began to issue speeding tickets based on the speed recorded by the radar device. [5] In 1948, radar was also used in Garden City, New York. [6]

Radar speed guns use Doppler radar to perform speed measurements. Like other types of radar, they consist of a radio transmitter and receiver. They send out a radio signal in a narrow beam, then receive the same signal back after it bounces off the target object. Due to the Doppler effect, if the object is moving toward or away from the gun, the frequency of the reflected radio waves when they come back is different from that of the transmitted waves. When the object is approaching the radar, the frequency of the return waves is higher than that of the transmitted waves; when the object is moving away, the frequency is lower. From that difference, the radar speed gun can calculate the speed of the object from which the waves have been bounced. This speed is given by the following equation:

$$v = \frac{c \, \Delta f}{2 f}$$

where $c$ is the speed of light, $f$ is the emitted frequency of the radio waves, and $\Delta f$ is the difference in frequency between the radio waves that are emitted and those received back by the gun. This equation holds precisely only when object speeds are low compared to that of light, but in everyday situations this is the case, and the velocity of an object is directly proportional to this difference in frequency.
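As a concrete illustration of the relation above, here is a minimal Python sketch that converts a measured Doppler shift into a target speed; the 24.125 GHz carrier and 2.0 kHz shift are assumed example values, not taken from any particular device.

```python
# Doppler speed estimate, v = c * delta_f / (2 * f), valid for v << c.
C = 299_792_458.0  # speed of light, m/s

def speed_from_doppler(f_emitted_hz: float, delta_f_hz: float) -> float:
    """Target speed (m/s) from the emitted frequency and the measured shift."""
    return C * delta_f_hz / (2.0 * f_emitted_hz)

# Assumed example: a K-band gun near 24.125 GHz observing a 2.0 kHz shift.
v = speed_from_doppler(24.125e9, 2.0e3)
print(f"{v:.1f} m/s = {v * 3.6:.1f} km/h")  # about 12.4 m/s, i.e. ~45 km/h
```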
After the returning waves are received, a signal with a frequency equal to this difference is created by mixing the received radio signal with a little of the transmitted signal. Just as two different musical notes played together create a beat note at the difference in frequency between them, so when these two radio signals are mixed they create a "beat" signal (called a heterodyne). An electrical circuit then measures this frequency using a digital counter to count the number of cycles in a fixed time period, and displays the number on a digital display as the object's speed.

Since this type of speed gun measures the difference in speed between a target and the gun itself, the gun must be stationary to give a correct reading. If a measurement is made from a moving car, it will give the difference in speed between the two vehicles, not the speed of the target relative to the road, so a different system has been designed to work from moving vehicles. In "moving radar", the radar antenna receives reflected signals from both the target vehicle and stationary background objects such as the road surface, nearby road signs, guard rails, and streetlight poles. Instead of comparing the frequency of the signal reflected from the target with the transmitted signal, it compares the target signal with this background signal. The frequency difference between these two signals gives the true speed of the target vehicle.

Modern radar speed guns normally operate at X, K, Ka, and (in Europe) Ku bands. Radar guns that operate in the X band (8 to 12 GHz) frequency range are becoming less common because they produce a strong and easily detectable beam. Also, most automatic doors utilize radio waves in the X band range and can possibly affect the readings of police radar. As a result, the K band (18 to 27 GHz) and Ka band (27 to 40 GHz) are most commonly used by police agencies. Some motorists install radar detectors which can alert them to the presence of a speed trap ahead, and the microwave signals from radar may also change the quality of reception of AM and FM radio signals when tuned to a weak station. For these reasons, hand-held radar typically includes an on-off trigger, and the radar is only turned on when the operator is about to make a measurement. Radar detectors are illegal in some areas. [7] [8]

Traffic radar comes in many models. Hand-held units are mostly battery powered and are for the most part used as stationary speed enforcement tools. Stationary radar can be mounted in police vehicles and may have one or two antennas. Moving radar is employed, as the name implies, when a police vehicle is in motion; it can be very sophisticated, able to track vehicles approaching and receding, both in front of and behind the patrol vehicle, and able to track multiple targets at once. It can also track the fastest vehicle in the selected radar beam, front or rear.

However, there are a number of limitations to the use of radar speed guns. For example, user training and certification are required so that a radar operator can use the equipment effectively. [9] Trainees are required to consistently estimate vehicle speed visually to within ±2 mph of the actual target speed; for example, if the target's actual speed is 30 mph, the operator must be able to consistently estimate it as falling between 28 and 32 mph.
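The moving-radar scheme described above can be sketched in a few lines of Python. This is a simplified model for an approaching target in the opposite lane (the simplest geometry); the function names and numbers are illustrative assumptions, not a vendor algorithm.

```python
C = 299_792_458.0  # m/s

def doppler_speed(f_tx_hz: float, delta_f_hz: float) -> float:
    """Relative speed implied by a Doppler shift (valid for v << c)."""
    return C * delta_f_hz / (2.0 * f_tx_hz)

def target_ground_speed(f_tx_hz: float, df_background_hz: float,
                        df_target_hz: float) -> float:
    """The background (clutter) return shifts by the patrol speed; the return
    from an approaching opposite-lane target shifts by the closing speed
    (patrol + target). Their difference is the target's own ground speed."""
    patrol = doppler_speed(f_tx_hz, df_background_hz)
    closing = doppler_speed(f_tx_hz, df_target_hz)
    return closing - patrol

# Assumed example shifts at 24.125 GHz: clutter 1.6 kHz, target 4.8 kHz.
print(f"{target_ground_speed(24.125e9, 1.6e3, 4.8e3) * 3.6:.0f} km/h")
```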
Stationary traffic enforcement radar must occupy a location above or to the side of the road, so the user must understand trigonometry to accurately estimate vehicle speed as the direction changes while a single vehicle moves within the field of view. Actual vehicle speed and the radar measurement are thus rarely the same, due to [10] what is known as the cosine effect. For all practical purposes, this difference between actual and measured speed is inconsequential, generally less than 1 mph, since police are trained to position the radar to minimize the inaccuracy; when present, the error is always in the driver's favor, reporting a lower-than-actual speed.

The placement of the radar is also important, to avoid large reflective surfaces near the unit. Such surfaces can create a multipath scenario, in which the radar beam is reflected off an unintended reflective target, finds another target, and returns, causing a reading that can be confused with the traffic being monitored. [citation needed] However, MythBusters did an episode on trying to get the gun to produce incorrect readings by changing the surface of the passing object, and found no significant effect. [11] [12]

Radar speed guns do not differentiate between targets in traffic, and proper operator training is essential for accurate speed enforcement. This inability to differentiate among targets in the radar's field of view is the primary reason the operator is required to consistently and accurately estimate target speeds visually to within ±2 mph. For example, if there are seven targets in the radar's field of view, the operator visually estimates six of them at approximately 40 mph and one at approximately 55 mph, and the radar unit displays a reading of 56 mph, it becomes clear which target's speed the unit is measuring.

In moving radar operation, another potential limitation occurs when the radar locks its patrol speed onto other moving targets rather than the actual ground speed. This can occur if the radar is positioned too close to a larger reflective target such as a tractor-trailer. To help alleviate this, secondary speed inputs from the vehicle's CAN bus or VSS signal, or a GPS-measured speed, can reduce errors by providing a second speed against which to compare the measured speed.

The primary limitation of hand-held and mobile radar devices is size. An antenna diameter of less than several feet limits directionality, which can only partly be compensated for by increasing the frequency of the wave. Size limitations can cause hand-held and mobile radar devices to produce measurements from multiple objects within the field of view of the user. The antenna on some of the most common hand-held devices is only 2 inches (5.1 cm) in diameter. The beam of energy produced by an antenna of this size using X-band frequencies occupies a cone that extends about 22 degrees around the line of sight, 44 degrees in total width. This beam is called the main lobe. There is also a side lobe extending from 22 to 66 degrees away from the line of sight, and other lobes as well, but side lobes are about 20 times (13 dB) less sensitive than the main lobe, although they will detect moving objects close by. The primary field of view is about 130 degrees wide.
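The cosine effect discussed at the start of this section is easy to quantify: the radar sees only the velocity component along the beam, so the reading is the true speed multiplied by the cosine of the angle between the beam and the direction of travel. A small sketch (angles chosen arbitrarily for illustration):

```python
import math

def measured_speed(true_speed: float, angle_deg: float) -> float:
    """Radar reading under the cosine effect: always at or below true speed."""
    return true_speed * math.cos(math.radians(angle_deg))

for angle in (0, 5, 10, 20):
    print(f"{angle:>2} deg off-axis: 60.0 mph reads {measured_speed(60.0, angle):.1f} mph")
```

At the small angles used in practice the error is well under 1 mph, and it always lowers the displayed speed, consistent with the text above.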
K-band reduces this field of view to about 65 degrees by increasing the frequency of the wave; Ka-band reduces it further to about 40 degrees. Side lobe detections can be eliminated using side lobe blanking, which narrows the field of view, but the additional antennas and complex circuitry impose size and price constraints that limit this to military, air traffic control, and weather applications. Mobile weather radar is mounted on semi-trailer trucks in order to narrow the beam.

A second limitation of hand-held devices is that they have to use continuous-wave radar to be light enough to be mobile. Speed measurements are only reliable when the distance at which a specific measurement has been recorded is known, and distance measurements require pulsed operation or cameras when more than one moving object is within the field of view. A continuous-wave radar may be aimed directly at a vehicle 100 yards away yet produce a speed measurement from a second vehicle 1 mile away when pointed down a straight roadway. This again falls back on the training and certification requirement for consistent and accurate visual speed estimation, so that operators can be certain which object's speed the device has measured, since distance information is unavailable with continuous-wave radar. Some sophisticated devices may produce separate speed measurements from multiple objects within the field of view. This allows the speed gun to be used from a moving vehicle, where a moving and a stationary object must be targeted simultaneously; some of the most sophisticated units are capable of displaying up to four separate target speeds while operating in moving mode, once again emphasizing the importance of the operator's ability to consistently and accurately estimate speed visually.

The environment and locality in which a measurement is taken can also play a role. Using a hand-held radar to scan traffic on an empty road while standing in the shade of a large tree, for example, might risk detecting the motion of the leaves and branches if the wind is blowing hard (side lobe detection). There may also be an unnoticed airplane overhead, particularly if there is an airport nearby, which again emphasizes the importance of proper operator training.

Conventional radar gun limitations can be corrected with a camera aimed along the line of sight. Cameras are associated with automated ticketing machines (known in the UK as speed cameras), where the radar is used to trigger a camera. The radar speed threshold is set at or above the maximum legal vehicle speed, and the radar triggers the camera to take several pictures when a nearby object exceeds this speed. Two pictures are required to determine vehicle speed using roadway survey markings. This can be reliable for traffic in city environments when multiple moving objects are within the field of view. In this case, however, it is the camera and its timing information that determine the speed of an individual vehicle; the radar gun simply alerts the camera to start recording.

Laser devices, such as a LIDAR speed gun, are capable of producing reliable range and speed measurements in typical urban and suburban traffic environments without the site survey limitation and cameras. LIDAR is reliable in city traffic because it has directionality similar to a typical firearm: the beam is shaped more like a pencil and produces a measurement only from the object at which it has been aimed.
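The two-picture measurement described above amounts to an average-speed calculation over a known baseline. A minimal sketch (the 5 m marking spacing and 0.3 s photo interval are assumed values):

```python
def speed_from_photos(marking_distance_m: float, t0_s: float, t1_s: float) -> float:
    """Average speed between two photographs taken as a vehicle crosses
    roadway survey markings a known distance apart."""
    return marking_distance_m / (t1_s - t0_s)

v = speed_from_photos(5.0, 0.00, 0.30)  # 5 m covered in 0.30 s
print(f"{v:.1f} m/s = {v * 3.6:.0f} km/h")  # 16.7 m/s = 60 km/h
```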
https://en.wikipedia.org/wiki/Radar_speed_gun
Radar warning receiver (RWR) systems detect the radio emissions of radar systems. Their primary purpose is to issue a warning when a radar signal that might be a threat is detected, such as a fighter aircraft's fire control radar. The warning can then be used, manually or automatically, to evade the detected threat. RWR systems can be installed in all kinds of airborne, sea-based, and ground-based assets, such as aircraft, ships, automobiles, and military bases. Depending on the market the RWR system is designed for, it can be as simple as detecting the presence of energy in a specific radar band, such as the frequencies of known surface-to-air missile systems. Modern RWR systems are often capable of classifying the source of the radar by the signal's strength, phase, and waveform details. The information about the signal's strength and waveform can then be used to estimate the type of threat the detected radar poses.

The RWR usually has a visual display somewhere prominent in the cockpit (in some modern aircraft, in multiple locations in the cockpit) and also generates audible tones which feed into the pilot's (and perhaps the RIO/co-pilot/GIB's, in a multi-seat aircraft) headset. The visual display often takes the form of a circle, with symbols displaying the detected radars according to their direction relative to the current aircraft heading (i.e. a radar straight ahead is displayed at the top of the circle, one directly behind at the bottom, etc.). The distance from the center of the circle, depending on the type of unit, can represent the estimated distance from the generating radar, or can categorize the severity of threats to the aircraft, with tracking radars placed closer to the center than search radars. The symbol itself is related to the type of radar or the type of vehicle that carries it, often with a distinction made between ground-based radars and airborne radars.

The typical airborne RWR system consists of multiple wideband antennas placed around the aircraft which receive the radar signals. The receiver periodically scans across the frequency band and determines various parameters of the received signals, such as frequency, signal shape, direction of arrival, and pulse repetition frequency. Using these measurements, the signals are first deinterleaved to sort the mixture of incoming signals by emitter type. These data are then further sorted by threat priority and displayed.

The RWR is used for identifying, avoiding, evading, or engaging threats. For example, a fighter aircraft on a combat air patrol (CAP) might notice enemy fighters on the RWR and subsequently use its own radar set to find and eventually engage the threat. In addition, the RWR helps identify and classify threats: it is hard to tell [citation needed] which blips on a radar console screen are dangerous, but since different fighter aircraft typically carry different types of radar sets, once they turn them on and point them near the aircraft in question, the crew may be able to tell, by the direction and strength of the signal, which of the blips is which type of fighter. A non-combat aircraft, or one attempting to avoid engagements, might turn its own radar off and attempt to steer around threats detected on the RWR. Especially at high altitude (more than 30,000 feet AGL), very few [citation needed] threats exist that don't emit radiation.
As long as the pilot is careful to check for aircraft that might try to sneak up without radar, say with the assistance of AWACS or GCI, they should be able to steer clear of SAMs, fighter aircraft, and high-altitude, radar-directed AAA. SEAD and ELINT aircraft often have sensitive and sophisticated RWR equipment, like the U.S. HTS (HARM targeting system) pod, which is able to find and classify threats much further away than those detected by a typical RWR. Such equipment may be able to overlay threat circles on a map in the aircraft's multi-function display (MFD), providing much better [1] information for avoiding or engaging threats, and may even store information to be analyzed later or transmitted to the ground to help commanders plan future missions.

The RWR can be an important tool for evading threats if avoidance has failed. For example, if a SAM system or enemy fighter aircraft has fired a missile (for example, a SARH-guided missile) at the aircraft, the RWR may be able to detect the change in mode that the radar must use to guide the missile, and notify the pilot with much more insistent warning tones and flashing, bracketed symbols on the RWR display. The pilot can then take evasive action to break the missile lock-on or dodge the missile, and may even be able to visually acquire the missile after being alerted to the possible launch. Moreover, if an actively guided missile is tracking the aircraft, the pilot can use the direction and distance display of the RWR to work out which evasive maneuvers to perform to outrun or dodge the missile. For example, the rate of closure and aspect of the incoming missile may allow the pilot to determine that if they dive away from the missile, it is unlikely to catch up, or, if it is closing fast, that it is time to jettison external stores and turn toward the missile in an attempt to out-turn it. The RWR may be able to send a signal to another defensive system on board the aircraft, such as a countermeasure dispensing system (CMDS), which can eject countermeasures such as chaff, to aid in avoidance.
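The deinterleaving step described earlier (sorting a mixed pulse stream by emitter before threat ranking) can be sketched as follows. This is a toy illustration under assumed tolerances and made-up pulse descriptor data; real RWRs use far more parameters and adaptive clustering.

```python
from collections import defaultdict

# Hypothetical pulse descriptors: (time_of_arrival_s, frequency_hz, bearing_deg)
pulses = [
    (0.0000, 9.40e9, 45), (0.0010, 9.40e9, 45), (0.0020, 9.40e9, 46),
    (0.0003, 5.60e9, 310), (0.0018, 5.60e9, 311), (0.0033, 5.60e9, 310),
]

# Deinterleave: bucket pulses by coarse frequency and bearing bins (assumed
# tolerances), then estimate each emitter's pulse repetition interval (PRI).
emitters = defaultdict(list)
for toa, freq, bearing in pulses:
    key = (round(freq / 1e8), round(bearing / 5))
    emitters[key].append(toa)

for key, toas in sorted(emitters.items()):
    toas.sort()
    pri = min(b - a for a, b in zip(toas, toas[1:]))
    print(f"emitter bin {key}: {len(toas)} pulses, PRI ~ {pri * 1e3:.2f} ms")
```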
https://en.wikipedia.org/wiki/Radar_warning_receiver
In computational learning theory (machine learning and theory of computation), Rademacher complexity, named after Hans Rademacher, measures the richness of a class of sets with respect to a probability distribution. The concept can also be extended to real-valued functions.

Given a set $A \subseteq \mathbb{R}^m$, the Rademacher complexity of $A$ is defined as follows: [1] [2]: 326

$$\operatorname{Rad}(A) := \frac{1}{m} \mathbb{E}_\sigma \left[ \sup_{a \in A} \sum_{i=1}^m \sigma_i a_i \right]$$

where $\sigma_1, \sigma_2, \dots, \sigma_m$ are independent random variables drawn from the Rademacher distribution, i.e. $\Pr(\sigma_i = +1) = \Pr(\sigma_i = -1) = 1/2$ for $i = 1, 2, \dots, m$, and $a = (a_1, \ldots, a_m)$. Some authors take the absolute value of the sum before taking the supremum, but if $A$ is symmetric this makes no difference.

Let $S = \{z_1, z_2, \dots, z_m\} \subset Z$ be a sample of points and consider a function class $\mathcal{F}$ of real-valued functions over $Z$. Then, the empirical Rademacher complexity of $\mathcal{F}$ given $S$ is defined as:

$$\operatorname{Rad}_S(\mathcal{F}) := \frac{1}{m} \mathbb{E}_\sigma \left[ \sup_{f \in \mathcal{F}} \sum_{i=1}^m \sigma_i f(z_i) \right]$$

This can also be written using the previous definition: [2]: 326

$$\operatorname{Rad}_S(\mathcal{F}) = \operatorname{Rad}(\mathcal{F} \circ S)$$

where $\mathcal{F} \circ S$ denotes function composition, i.e.:

$$\mathcal{F} \circ S := \{ (f(z_1), \ldots, f(z_m)) \mid f \in \mathcal{F} \}$$

The worst-case empirical Rademacher complexity is

$$\overline{\operatorname{Rad}}_m(\mathcal{F}) = \sup_{S = \{z_1, \dots, z_m\}} \operatorname{Rad}_S(\mathcal{F})$$

Let $P$ be a probability distribution over $Z$. The Rademacher complexity of the function class $\mathcal{F}$ with respect to $P$ for sample size $m$ is:

$$\operatorname{Rad}_{P,m}(\mathcal{F}) := \mathbb{E}_{S \sim P^m} \left[ \operatorname{Rad}_S(\mathcal{F}) \right]$$

where the above expectation is taken over an independent and identically distributed (i.i.d.) sample $S = (z_1, z_2, \dots, z_m)$ generated according to $P$.

The Rademacher complexity is typically applied to a function class of models used for classification, with the goal of measuring their ability to classify points drawn from a probability space under arbitrary labellings. When the function class is rich enough, it contains functions that can appropriately adapt to each arrangement of labels, simulated by the random draw of $\sigma_i$ under the expectation, so that the quantity in the sum is maximised.

1. $A$ contains a single vector, e.g., $A = \{(a, b)\} \subset \mathbb{R}^2$. Then:

$$\operatorname{Rad}(A) = \frac{1}{2} \mathbb{E}[\sigma_1 a + \sigma_2 b] = 0$$

The same is true for every singleton hypothesis class. [3]: 56

2. $A$ contains two vectors, e.g., $A = \{(1, 1), (1, 2)\} \subset \mathbb{R}^2$. Then:

$$\operatorname{Rad}(A) = \frac{1}{2} \mathbb{E}\left[ \max(\sigma_1 + \sigma_2, \sigma_1 + 2\sigma_2) \right] = \frac{1}{4}$$

The Rademacher complexity can be used to derive data-dependent upper bounds on the learnability of function classes. Intuitively, a function class with smaller Rademacher complexity is easier to learn.
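To make the definition concrete, the Rademacher complexity of a small finite set $A \subset \mathbb{R}^m$ can be computed exactly by enumerating all $2^m$ sign patterns; the following sketch reproduces example 2's value of 1/4 (the function name is ours, for illustration).

```python
import itertools

def rad_exact(A):
    """Exact Rad(A) = (1/m) * E_sigma[ sup_{a in A} sum_i sigma_i * a_i ]
    for a finite set A of vectors in R^m, by full enumeration of signs."""
    m = len(A[0])
    total = 0.0
    for sigma in itertools.product((-1, 1), repeat=m):
        total += max(sum(s * a for s, a in zip(sigma, vec)) for vec in A)
    return total / (2 ** m * m)

print(rad_exact([(0.5, -0.3)]))      # singleton class: 0.0
print(rad_exact([(1, 1), (1, 2)]))   # example 2 above: 0.25
```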
In machine learning, it is desirable to have a training set that represents the true distribution of some sample data $S$. This can be quantified using the notion of representativeness. Denote by $P$ the probability distribution from which the samples are drawn. Denote by $H$ the set of hypotheses (potential classifiers) and by $\mathcal{F}$ the corresponding set of error functions, i.e., for every hypothesis $h \in H$, there is a function $f_h \in \mathcal{F}$ that maps each training sample (features, label) to the error of the classifier $h$ (note that in this case hypothesis and classifier are used interchangeably). For example, when $h$ represents a binary classifier, the error function is a 0–1 loss function, i.e. the error function $f_h$ returns 0 if $h$ correctly classifies a sample and 1 otherwise. We omit the index and write $f$ instead of $f_h$ when the underlying hypothesis is irrelevant. Define:

$$L_P(f) := \mathbb{E}_{z \sim P}[f(z)] \quad \text{(the true error of } f \text{ on } P\text{)}, \qquad L_S(f) := \frac{1}{m} \sum_{i=1}^m f(z_i) \quad \text{(the empirical error of } f \text{ on } S\text{)}$$

The representativeness of the sample $S$, with respect to $P$ and $\mathcal{F}$, is defined as:

$$\operatorname{Rep}_P(\mathcal{F}, S) := \sup_{f \in \mathcal{F}} \left( L_P(f) - L_S(f) \right)$$

Smaller representativeness is better, since it provides a way to avoid overfitting: it means that the true error of a classifier is not much higher than its estimated error, so selecting a classifier with low estimated error ensures that the true error is also low. Note, however, that the concept of representativeness is relative and hence cannot be compared across distinct samples.

The expected representativeness of a sample can be bounded above by the Rademacher complexity of the function class: if $\mathcal{F}$ is a set of functions with range within $[0, 1]$, then [2]: 326 [4]

$$\mathbb{E}_{S \sim P^m}\left[ \operatorname{Rep}_P(\mathcal{F}, S) \right] \leq 2 \operatorname{Rad}_{P,m}(\mathcal{F})$$

Furthermore, the representativeness is concentrated around its expectation: [4] for any $\epsilon > 0$, with probability $\geq 1 - 2e^{-2\epsilon^2 m}$,

$$\operatorname{Rep}_P(\mathcal{F}, S) \in \mathbb{E}_{S \sim P^m}[\operatorname{Rep}_P(\mathcal{F}, S)] \pm \epsilon$$

The Rademacher complexity provides a theoretical justification for empirical risk minimization. When the error function is binary (0–1 loss), for every $\delta > 0$, with probability at least $1 - \delta$: [2]: 328

$$L_P(f) \leq L_S(f) + 2 \operatorname{Rad}_{P,m}(\mathcal{F}) + \sqrt{\frac{\ln(1/\delta)}{2m}} \quad \forall f \in \mathcal{F}$$

There exists a constant $c > 0$ such that, when the error function is squared, $\ell(\hat{y}, y) := (\hat{y} - y)^2$, and the function class $\mathcal{F}$ consists of functions with range within $[-1, +1]$, then for any $\delta > 0$, with probability at least $1 - \delta$: [4]: Theorem 2.2

$$L_P(f) - L_S(f) \leq c \left[ L_S(f) + (\ln m)^4 \, \overline{\operatorname{Rad}}_m(\mathcal{F})^2 + \frac{\ln(1/\delta)}{m} \right] \quad \forall f \in \mathcal{F}$$

Let the Bayes risk be $L^* = \inf_f L_P(f)$, where $f$ ranges over all measurable functions. Let the function class $\mathcal{F}$ be split into "complexity classes" $\mathcal{F}_r$, where $r \in \mathbb{R}$ are levels of complexity. Let $p_r$ be real numbers.
Let the complexity measure function $p$ be defined such that $p(f) := \min \{ p_r : f \in \mathcal{F}_r \}$. For any dataset $S$, let $\hat{f}$ be a minimizer of $L_S(f) + p(f)$. If

$$\sup_{f \in \mathcal{F}_r} |L_P(f) - L_S(f)| \leq p_r \quad \forall r$$

then we have the oracle inequality

$$L(\hat{f}) - L^* \leq \inf_r \left( \inf_{f \in \mathcal{F}_r} L(f) - L^* + 2 p_r \right)$$

Define $f_r^* \in \arg\min_{f \in \mathcal{F}_r} L(f)$. If we further assume that $r \leq s$ implies $\mathcal{F}_r \subseteq \mathcal{F}_s$ and $p_r \leq p_s$, and that for all $r$

$$\begin{aligned} \sup_{f \in \mathcal{F}_r} \left( L_P(f) - L_P(f_r^*) - 2 \left( L_S(f) - L_S(f_r^*) \right) \right) &\leq 2 p_r / 7 \\ \sup_{f \in \mathcal{F}_r} \left( L_S(f) - L_S(f_r^*) - 2 \left( L_P(f) - L_P(f_r^*) \right) \right) &\leq 2 p_r / 7 \end{aligned}$$

then we have the oracle inequality

$$L_P(\hat{f}) - L^* \leq \inf_r \left( \inf_{f \in \mathcal{F}_r} L_P(f) - L^* + 3 p_r \right)$$

[4]: Theorem 2.3

Since smaller Rademacher complexity is better, it is useful to have upper bounds on the Rademacher complexity of various function sets. The following rules can be used to upper-bound the Rademacher complexity of a set $A \subset \mathbb{R}^m$. [2]: 329–330

1. If all vectors in $A$ are translated by a constant vector $a_0 \in \mathbb{R}^m$, then Rad(A) does not change.
2. If all vectors in $A$ are multiplied by a scalar $c \in \mathbb{R}$, then Rad(A) is multiplied by $|c|$.
3. $\operatorname{Rad}(A + B) = \operatorname{Rad}(A) + \operatorname{Rad}(B)$. [3]: 56
4. (Kakade & Tewari lemma) If all vectors in $A$ are operated on by a Lipschitz function, then Rad(A) is (at most) multiplied by the Lipschitz constant of the function. In particular, if all vectors in $A$ are operated on by a contraction mapping, then Rad(A) strictly decreases.
5. The Rademacher complexity of the convex hull of $A$ equals Rad(A).
6. (Massart's lemma) The Rademacher complexity of a finite set grows logarithmically with the set size. Formally, let $A$ be a set of $N$ vectors in $\mathbb{R}^m$, and let $\bar{a}$ be the mean of the vectors in $A$. Then:

$$\operatorname{Rad}(A) \leq \max_{a \in A} \|a - \bar{a}\|_2 \cdot \frac{\sqrt{2 \ln N}}{m}$$

In particular, if $A$ is a set of binary vectors, the norm is at most $\sqrt{m}$, so:

$$\operatorname{Rad}(A) \leq \sqrt{\frac{2 \ln N}{m}}$$
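Massart's lemma (rule 6) can be sanity-checked numerically against the exact enumeration from the earlier sketch; the three-vector set below is an arbitrary toy example.

```python
import itertools, math

def rad_exact(A):
    """Exact Rademacher complexity of a finite set by sign enumeration."""
    m = len(A[0])
    total = sum(
        max(sum(s * a for s, a in zip(sigma, vec)) for vec in A)
        for sigma in itertools.product((-1, 1), repeat=m)
    )
    return total / (2 ** m * m)

def massart_bound(A):
    """Massart's bound: max_a ||a - mean|| * sqrt(2 ln N) / m."""
    m, n = len(A[0]), len(A)
    mean = [sum(vec[i] for vec in A) / n for i in range(m)]
    radius = max(math.dist(vec, mean) for vec in A)
    return radius * math.sqrt(2 * math.log(n)) / m

A = [(1, 1), (1, 2), (0, -1)]  # arbitrary toy set
print(f"Rad(A) = {rad_exact(A):.3f} <= Massart bound {massart_bound(A):.3f}")
```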
Let $H$ be a set family whose VC dimension is $d$. It is known that the growth function of $H$ is bounded as:

$$\operatorname{Growth}(H, m) \leq \left( \frac{em}{d} \right)^d \quad \text{for all } m \geq d$$

This means that, for every set $h$ with at most $m$ elements, $|H \cap h| \leq (em/d)^d$. The set family $H \cap h$ can be considered as a set of binary vectors over $\mathbb{R}^m$. Substituting this in Massart's lemma gives:

$$\operatorname{Rad}(H \cap h) \leq \sqrt{\frac{2 d \ln(em/d)}{m}}$$

With more advanced techniques (Dudley's entropy bound and Haussler's upper bound [5]) one can show, for example, that there exists a constant $C$ such that any class of $\{0, 1\}$-indicator functions with Vapnik–Chervonenkis dimension $d$ has Rademacher complexity upper-bounded by $C \sqrt{\frac{d}{m}}$.

The following bounds are related to linear operations on $S$, a constant set of $m$ vectors in $\mathbb{R}^n$. [2]: 332–333

1. Define $A_2 = \{ (w \cdot x_1, \ldots, w \cdot x_m) \mid \|w\|_2 \leq 1 \}$, the set of dot-products of the vectors in $S$ with vectors in the unit ball of the 2-norm. Then:

$$\operatorname{Rad}(A_2) \leq \frac{\max_i \|x_i\|_2}{\sqrt{m}}$$

2. Define $A_1 = \{ (w \cdot x_1, \ldots, w \cdot x_m) \mid \|w\|_1 \leq 1 \}$, the set of dot-products of the vectors in $S$ with vectors in the unit ball of the 1-norm. Then:

$$\operatorname{Rad}(A_1) \leq \max_i \|x_i\|_\infty \cdot \sqrt{\frac{2 \ln(2n)}{m}}$$

The following bound relates the Rademacher complexity of a set $A$ to its external covering number, the number of balls of a given radius $r$ whose union contains $A$. The bound is attributed to Dudley. [2]: 338 Suppose $A \subset \mathbb{R}^m$ is a set of vectors whose length (norm) is at most $c$. Then, for every integer $M > 0$, Rad(A) can be bounded by a sum, over the scales $c 2^{-i}$ for $i = 1, \ldots, M$, of terms involving the covering numbers at those scales. In particular, if $A$ lies in a $d$-dimensional subspace of $\mathbb{R}^m$, its covering numbers grow only polynomially in $1/r$, and substituting this in the bound shows that the Rademacher complexity is of order $\sqrt{d/m}$, up to constant and logarithmic factors.

Gaussian complexity is a similar complexity measure with a similar physical meaning, obtained from the Rademacher complexity by using the random variables $g_i$ instead of $\sigma_i$, where the $g_i$ are Gaussian i.i.d. random variables with zero mean and variance 1, i.e. $g_i \sim \mathcal{N}(0, 1)$. Gaussian and Rademacher complexities are known to be equivalent up to logarithmic factors. Given a set $A \subseteq \mathbb{R}^n$, it holds that [6]:

$$\frac{G(A)}{2 \sqrt{\log n}} \leq \operatorname{Rad}(A) \leq \sqrt{\frac{\pi}{2}} \, G(A)$$

where $G(A)$ is the Gaussian complexity of $A$. As an example, consider the Rademacher and Gaussian complexities of the $L_1$ ball. The Rademacher complexity is given by exactly 1, whereas the Gaussian complexity is on the order of $\sqrt{\log d}$ (which can be shown by applying known properties of suprema of a set of subgaussian random variables). [6]
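For the linear class $A_2$ defined above, the supremum over the unit ball has a closed form by Cauchy–Schwarz: $\sup_{\|w\|_2 \leq 1} \sum_i \sigma_i (w \cdot x_i) = \|\sum_i \sigma_i x_i\|_2$. A Monte Carlo sketch comparing the resulting estimate with the bound above (sample points chosen arbitrarily):

```python
import math, random

def rad_A2_estimate(xs, trials=20_000, seed=0):
    """Monte Carlo estimate of Rad(A_2): for each sign draw, the supremum
    over the unit 2-norm ball equals ||sum_i sigma_i x_i||_2 (Cauchy-Schwarz)."""
    rng = random.Random(seed)
    m, n = len(xs), len(xs[0])
    total = 0.0
    for _ in range(trials):
        signs = [rng.choice((-1, 1)) for _ in range(m)]
        v = [sum(signs[i] * xs[i][j] for i in range(m)) for j in range(n)]
        total += math.hypot(*v) / m
    return total / trials

xs = [(1.0, 0.0), (0.6, 0.8), (0.0, -1.0), (0.5, 0.5)]  # arbitrary sample
print(f"estimate {rad_A2_estimate(xs):.3f} <= bound "
      f"{max(math.hypot(*x) for x in xs) / math.sqrt(len(xs)):.3f}")
```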
https://en.wikipedia.org/wiki/Rademacher_complexity
In mathematical analysis, the Rademacher–Menchov theorem, introduced by Rademacher (1922) and Menchoff (1923), gives a sufficient condition for a series of orthogonal functions on an interval to converge almost everywhere. If the coefficients $c_\nu$ of a series of bounded orthogonal functions on an interval satisfy

$$\sum_\nu |c_\nu|^2 \log^2 \nu < \infty$$

then the series converges almost everywhere.
https://en.wikipedia.org/wiki/Rademacher–Menchov_theorem
Radenska is a Slovenia-based, internationally known brand of mineral water and a trademark of the company Radenska d.o.o. It is one of the oldest Slovenian brands. Development of the mineral water company started at Radenci in 1869, when Karl Henn, owner of the land, filled the first bottles of mineral water. The Radenska water company and spa became one of the largest and most recognized companies in the Kingdom of Yugoslavia.

The company remained a family business. Josef Karl Hoehn and Maria Karolina Henn continued to build the business, passing it on to their son Josef Hoehn. Their son, Werner Johann Josef Hoehn, married Wilhelmine Witlschnig, who proved to be a formidable leader and entrepreneur; her determination and skill, at a time when women leaders faced many structural obstacles, proved critical in building and expanding Radenska Water and the family's hotel and spa business. After Werner Hoehn died in a motorcycle accident, the widowed Wilhelmine successfully grew the business. She married Dr. Ante Saric, a prominent physician. Along with their children Wilhelm and Rudolf, they built a company that served the greater Yugoslavia and included the factory in Radenci, the Radenci Spa Resort, and the Šmarješke Toplice spa, along with over 1,000 additional parcels, including forested areas, farms, and vineyards.

The April War of 1941, in which Nazi Germany took control of Yugoslavia, became a pivotal moment for the family and Radenska. Earlier in the war, their son Wilhelm had perished. Despite their local prominence, Ante and Wilhelmine were leaders in the Partisan resistance fighting the Nazis. In 1945, they were discovered, and the Nazis publicly executed Ante Saric. The war's end came quickly, and Josip Broz Tito rose to power. Under his Communist rule, the family's businesses, land, and homes were nationalized. Wilhelmine had lost her first husband, her son, and her second husband. Along with her 16-year-old son, Rudolf, she was forced to leave Yugoslavia to escape the Tito regime. Starting over 30 years ago, Rudolf and now his children have fought to have the Radenska water and spa business restored to the family; through dozens of court hearings, the government of Slovenia has opposed its return. [1] In December 2014, the Kofola Group acquired Radenska from the Slovenian government at a valuation of approximately 70 million euros. [2] Despite the disputed ownership history, Kofola continues to fight efforts to restore the business to the family. [3]

The mineral water brand name Radenska Three Hearts (Radenska Tri srca) has been in use since 1936. It was designed in 1931 by the illustrator Milko Bambič. According to the author, the three hearts symbolised three constituent nations of the Kingdom of Yugoslavia: Serbs, Croats, and Slovenes. [4] The company is the title sponsor of the UCI Continental cycling team Pogi Team Gusto Ljubljana. [citation needed]
https://en.wikipedia.org/wiki/Radenska
Radial chromatography is a form of chromatography, a preparative technique for separating chemical mixtures. It is also referred to as centrifugal thin-layer chromatography. It is a common technique for isolating compounds and is comparable to column chromatography as a similar process. A common device used for this technique is a Chromatotron. [1] Here the solvent travels from the center of the circular silica layer coated on a plate towards the periphery. The entire system is kept covered in order to prevent evaporation of solvent while developing a chromatogram. A wick at the center of the system drips solvent onto the plate, providing the mobile phase and moving the sample radially outward, so that the different compounds in the sample form spots as concentric rings.

Continuous annular chromatography uses a stationary phase packed into an annular gap. The eluent is fed continuously across the whole bed interface, while the feed is introduced continuously at the top of the stationary phase at a single point only, rather than across the whole bed. The stationary phase is rotated at a fixed speed. The rotation speed and the eluent and feed flow rates must be defined precisely so that each collection vessel collects only the intended substance. Retention times are thereby transformed into the respective retention angles. [citation needed]
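Since a rotating annular bed maps retention time to exit angle linearly (angle equals rotation rate times retention time), the collector positions follow directly. A minimal sketch with assumed rotation speed and retention times:

```python
def retention_angle_deg(rotation_rpm: float, retention_time_s: float) -> float:
    """Angle from the feed point at which a component leaves the rotating bed."""
    return (rotation_rpm * 6.0 * retention_time_s) % 360.0  # rpm -> deg/s is *6

# Assumed values: bed rotating at 0.25 rpm, two components with different
# retention times, exiting at distinct angles for separate collection.
for name, t_r in (("component A", 90.0), ("component B", 200.0)):
    print(name, f"-> {retention_angle_deg(0.25, t_r):.0f} deg")
```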
https://en.wikipedia.org/wiki/Radial_chromatography
In statistical mechanics, the radial distribution function (or pair correlation function) $g(r)$ in a system of particles (atoms, molecules, colloids, etc.) describes how density varies as a function of distance from a reference particle. If a given particle is taken to be at the origin O, and if $\rho = N/V$ is the average number density of particles, then the local time-averaged density at a distance $r$ from O is $\rho g(r)$. This simplified definition holds for a homogeneous and isotropic system. A more general case will be considered below.

In simplest terms, it is a measure of the probability of finding a particle at a distance $r$ away from a given reference particle, relative to that for an ideal gas. The general algorithm involves determining how many particles have centers within a spherical shell between distances $r$ and $r + dr$ from a reference particle. The radial distribution function is usually determined by calculating the distance between all particle pairs and binning them into a histogram. The histogram is then normalized with respect to an ideal gas, where particle histograms are completely uncorrelated. For three dimensions, this normalization is the number density of the system $\rho$ multiplied by the volume of the spherical shell, which symbolically can be expressed as $\rho \, 4\pi r^2 dr$.

Given a potential energy function, the radial distribution function can be computed either via computer simulation methods like the Monte Carlo method, or via the Ornstein–Zernike equation, using approximative closure relations like the Percus–Yevick approximation or the hypernetted-chain theory. It can also be determined experimentally, by radiation scattering techniques or by direct visualization for large enough (micrometer-sized) particles via traditional or confocal microscopy.

The radial distribution function is of fundamental importance since it can be used, via the Kirkwood–Buff solution theory, to link microscopic details to macroscopic properties. Moreover, by the reversion of the Kirkwood–Buff theory, it is possible to attain the microscopic details of the radial distribution function from the macroscopic properties. The radial distribution function may also be inverted to predict the potential energy function using the Ornstein–Zernike equation or structure-optimized potential refinement. [1]
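The histogram algorithm described above translates almost directly into code. A minimal NumPy sketch for a cubic box with periodic boundaries (minimum-image convention); the random points stand in for real particle coordinates, so g(r) should come out close to 1 at all distances:

```python
import numpy as np

def radial_distribution(positions, box_length, dr=0.05):
    """Histogram-based g(r): count pair distances, then normalize by the
    ideal-gas expectation rho * 4*pi*r^2*dr per reference particle."""
    n = len(positions)
    rho = n / box_length**3
    bins = np.arange(0.0, box_length / 2 + dr, dr)
    counts = np.zeros(len(bins) - 1)
    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]          # pair separation vectors
        d -= box_length * np.round(d / box_length)    # minimum-image convention
        counts += np.histogram(np.linalg.norm(d, axis=1), bins=bins)[0]
    r = 0.5 * (bins[1:] + bins[:-1])
    shell = 4.0 * np.pi * r**2 * dr                   # ideal-gas shell volume
    # each pair was counted once, hence the factor of 2 when averaging over n
    return r, 2.0 * counts / (n * rho * shell)

rng = np.random.default_rng(0)
r, g = radial_distribution(rng.uniform(0.0, 10.0, (500, 3)), box_length=10.0)
print(np.round(g[5:10], 2))  # uncorrelated points: g(r) is approximately 1
```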
Consider a system of $N$ particles in a volume $V$ (for an average number density $\rho = N/V$) and at a temperature $T$ (let us also define $\beta = \frac{1}{kT}$, with $k$ the Boltzmann constant). The particle coordinates are $\mathbf{r}_i$, with $i = 1, \ldots, N$. The potential energy due to the interaction between particles is $U_N(\mathbf{r}_1, \ldots, \mathbf{r}_N)$, and we do not consider the case of an externally applied field.

The appropriate averages are taken in the canonical ensemble $(N, V, T)$, with $Z_N = \int \cdots \int e^{-\beta U_N} \, d\mathbf{r}_1 \cdots d\mathbf{r}_N$ the configurational integral, taken over all possible combinations of particle positions. The probability of an elementary configuration, namely finding particle 1 in $d\mathbf{r}_1$, particle 2 in $d\mathbf{r}_2$, etc., is given by

$$P^{(N)}(\mathbf{r}_1, \ldots, \mathbf{r}_N) \, d\mathbf{r}_1 \cdots d\mathbf{r}_N = \frac{e^{-\beta U_N}}{Z_N} \, d\mathbf{r}_1 \cdots d\mathbf{r}_N \qquad (1)$$

The total number of particles is huge, so that $P^{(N)}$ in itself is not very useful. However, one can also obtain the probability of a reduced configuration, where the positions of only $n < N$ particles are fixed, in $\mathbf{r}_1, \ldots, \mathbf{r}_n$, with no constraints on the remaining $N - n$ particles. To this end, one has to integrate (1) over the remaining coordinates $\mathbf{r}_{n+1}, \ldots, \mathbf{r}_N$:

$$P^{(n)}(\mathbf{r}_1, \ldots, \mathbf{r}_n) = \frac{1}{Z_N} \int \cdots \int e^{-\beta U_N} \, d\mathbf{r}_{n+1} \cdots d\mathbf{r}_N$$

If the particles are non-interacting, in the sense that the potential energy of each particle does not depend on any of the other particles, $U_N(\mathbf{r}_1, \dots, \mathbf{r}_N) = \sum_{i=1}^N U_1(\mathbf{r}_i)$, then the partition function factorizes, and the probability of an elementary configuration decomposes with independent arguments into a product of single-particle probabilities:

$$Z_N = \prod_{i=1}^N \int d^3\mathbf{r}_i \, e^{-\beta U_1(\mathbf{r}_i)} = Z_1^N, \qquad P^{(n)}(\mathbf{r}_1, \dots, \mathbf{r}_n) = P^{(1)}(\mathbf{r}_1) \cdots P^{(1)}(\mathbf{r}_n)$$

Note how for non-interacting particles the probability is symmetric in its arguments. This is not true in general, and the order in which the positions occupy the argument slots of $P^{(n)}$ matters. Given a set of positions, the number of ways the $N$ particles can occupy those positions is $N!$. The probability that those positions are occupied is found by summing over all configurations in which a particle is at each of those locations. This can be done by taking every permutation $\pi$ in the symmetric group on $N$ objects, $S_N$, to write $\sum_{\pi \in S_N} P^{(N)}(\mathbf{r}_{\pi(1)}, \ldots, \mathbf{r}_{\pi(N)})$. For fewer positions, we integrate over the extraneous arguments and include a correction factor to prevent overcounting:

$$\rho^{(n)}(\mathbf{r}_1, \ldots, \mathbf{r}_n) = \frac{1}{(N - n)!} \left( \prod_{i=n+1}^N \int d^3\mathbf{r}_i \right) \sum_{\pi \in S_N} P^{(N)}(\mathbf{r}_{\pi(1)}, \ldots, \mathbf{r}_{\pi(N)})$$

This quantity is called the n-particle density function.
For indistinguishable particles, one could permute all the particle positions, $\forall i, \mathbf{r}_i \rightarrow \mathbf{r}_{\pi(i)}$, without changing the probability of an elementary configuration, $P(\mathbf{r}_{\pi(1)}, \dots, \mathbf{r}_{\pi(N)}) = P(\mathbf{r}_1, \dots, \mathbf{r}_N)$, so that the n-particle density function reduces to

$$\rho^{(n)}(\mathbf{r}_1, \ldots, \mathbf{r}_n) = \frac{N!}{(N-n)!} P^{(n)}(\mathbf{r}_1, \ldots, \mathbf{r}_n)$$

Integrating the n-particle density gives the permutation factor ${}_N P_n$, counting the number of ways one can sequentially pick particles to place at the $n$ positions out of the total $N$ particles.

Now let us consider how to interpret this function for different values of $n$. For $n = 1$, we have the one-particle density. For a crystal it is a periodic function with sharp maxima at the lattice sites. For a non-interacting gas, it is independent of the position $\mathbf{r}_1$ and equal to the overall number density $\rho$ of the system. To see this, first note that $U_N = 0$ in the volume occupied by the gas and $U_N = \infty$ everywhere else. The partition function in this case is

$$Z_N = \int \cdots \int d^3\mathbf{r}_1 \cdots d^3\mathbf{r}_N \, 1 = V^N$$

from which the definition gives the desired result

$$\rho^{(1)}(\mathbf{r}_1) = \frac{N!}{(N-1)!} \frac{1}{V^N} \int \cdots \int d^3\mathbf{r}_2 \cdots d^3\mathbf{r}_N \, 1 = \frac{N}{V} = \rho$$

In fact, for this special case every n-particle density is independent of coordinates and can be computed explicitly:

$$\rho^{(n)}(\mathbf{r}_1, \dots, \mathbf{r}_n) = \frac{N!}{(N-n)!} \frac{1}{V^N} \prod_{i=n+1}^N \int d^3\mathbf{r}_i \, 1 = \frac{N!}{(N-n)!} \frac{1}{V^n}$$

For $N \gg n$, the non-interacting n-particle density is approximately

$$\rho^{(n)}_{\text{non-interacting}}(\mathbf{r}_1, \dots, \mathbf{r}_n) = \left( 1 - \frac{n(n-1)}{2N} + \cdots \right) \rho^n \approx \rho^n \, . \ [2]$$

With this in hand, the n-point correlation function $g^{(n)}$ is defined by factoring out the non-interacting contribution [citation needed]:

$$\rho^{(n)}(\mathbf{r}_1, \ldots, \mathbf{r}_n) = \rho^{(n)}_{\text{non-interacting}} \, g^{(n)}(\mathbf{r}_1, \ldots, \mathbf{r}_n)$$

Explicitly, this definition reads

$$g^{(n)}(\mathbf{r}_1, \ldots, \mathbf{r}_n) = \frac{V^N}{N!} \left( \prod_{i=n+1}^N \frac{1}{V} \int d^3\mathbf{r}_i \right) \frac{1}{Z_N} \sum_{\pi \in S_N} e^{-\beta U(\mathbf{r}_{\pi(1)}, \ldots, \mathbf{r}_{\pi(N)})}$$

where it is clear that the n-point correlation function is dimensionless.
The second-order correlation function $g^{(2)}(\mathbf{r}_1, \mathbf{r}_2)$ is of special importance, as it is directly related (via a Fourier transform) to the structure factor of the system and can thus be determined experimentally using X-ray diffraction or neutron diffraction. [3]

If the system consists of spherically symmetric particles, $g^{(2)}(\mathbf{r}_1, \mathbf{r}_2)$ depends only on the relative distance between them, $\mathbf{r}_{12} = \mathbf{r}_2 - \mathbf{r}_1$. We will drop the sub- and superscript: $g(\mathbf{r}) \equiv g^{(2)}(\mathbf{r}_{12})$. Taking particle 0 as fixed at the origin of the coordinates, $\rho g(\mathbf{r}) \, d^3r = dn(\mathbf{r})$ is the average number of particles (among the remaining $N - 1$) to be found in the volume $d^3r$ around the position $\mathbf{r}$. We can formally count these particles and take the average via the expression $\frac{dn(\mathbf{r})}{d^3r} = \langle \sum_{i \neq 0} \delta(\mathbf{r} - \mathbf{r}_i) \rangle$, with $\langle \cdot \rangle$ the ensemble average, yielding:

$$\rho g(\mathbf{r}) = \left\langle \sum_{i \neq 0} \delta(\mathbf{r} - \mathbf{r}_i) \right\rangle = (N - 1) \left\langle \delta(\mathbf{r} - \mathbf{r}_1) \right\rangle$$

where the second equality requires the equivalence of particles $1, \ldots, N - 1$. The formula above is useful for relating $g(\mathbf{r})$ to the static structure factor $S(\mathbf{q})$, defined by $S(\mathbf{q}) = \langle \sum_{ij} e^{-i\mathbf{q} \cdot (\mathbf{r}_i - \mathbf{r}_j)} \rangle / N$, since we have:

$$S(\mathbf{q}) = 1 + \rho \int_V d\mathbf{r} \, e^{-i\mathbf{q} \cdot \mathbf{r}} g(\mathbf{r})$$

This equation is only valid in the sense of distributions, since $g(\mathbf{r})$ is not normalized: $\lim_{r \rightarrow \infty} g(\mathbf{r}) = 1$, so that $\int_V d\mathbf{r} \, g(\mathbf{r})$ diverges as the volume $V$, leading to a Dirac peak at the origin for the structure factor. Since this contribution is inaccessible experimentally, we can subtract it from the equation above and redefine the structure factor as a regular function:

$$S'(\mathbf{q}) = 1 + \rho \int_V d\mathbf{r} \, e^{-i\mathbf{q} \cdot \mathbf{r}} \left[ g(\mathbf{r}) - 1 \right]$$

Finally, we rename $S(\mathbf{q}) \equiv S'(\mathbf{q})$ and, if the system is a liquid, we can invoke its isotropy:

$$S(q) = 1 + 4\pi\rho \, \frac{1}{q} \int_0^\infty dr \, r \sin(qr) \left[ g(r) - 1 \right] \qquad (6)$$

Evaluating (6) at $q = 0$ and using the relation between the isothermal compressibility $\chi_T$ and the structure factor at the origin yields the compressibility equation:

$$\rho k_B T \chi_T = S(0) = 1 + \rho \int_V d\mathbf{r} \, \left[ g(r) - 1 \right]$$

It can be shown [4] that the radial distribution function is related to the two-particle potential of mean force $w^{(2)}(r)$ by:

$$g(r) = e^{-\beta w^{(2)}(r)}$$

In the dilute limit, the potential of mean force is the exact pair potential under which the equilibrium point configuration has a given $g(r)$.
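Equation (6) can be evaluated numerically from a tabulated g(r). The sketch below uses an idealized dilute hard-sphere-like g(r) (zero inside an excluded core of diameter 1, unity outside) purely as assumed test data:

```python
import numpy as np

def structure_factor(r, g, rho, q):
    """S(q) = 1 + (4*pi*rho/q) * int dr r*sin(q*r)*(g(r) - 1), eq. (6)."""
    integrand = r * np.sin(np.outer(q, r)) * (g - 1.0)
    return 1.0 + 4.0 * np.pi * rho / q * np.trapz(integrand, r, axis=1)

r = np.linspace(0.01, 10.0, 1000)
g = np.where(r < 1.0, 0.0, 1.0)      # toy g(r): excluded core, no structure
q = np.array([0.5, 2.0, 5.0, 10.0])
print(structure_factor(r, g, rho=0.1, q=q).round(3))
```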
If the particles interact via identical pairwise potentials, $U_N = \sum_{i > j = 1}^N u(|\mathbf{r}_i - \mathbf{r}_j|)$, the average internal energy per particle is: [5]: Section 2.5

$$\frac{\langle U_N \rangle}{N} = \frac{\rho}{2} \int_V d\mathbf{r} \, u(r) g(r) = 2\pi\rho \int_0^\infty dr \, r^2 u(r) g(r)$$

Developing the virial equation yields the pressure equation of state:

$$p = \rho k T - \frac{2\pi}{3} \rho^2 \int_0^\infty dr \, r^3 u'(r) g(r)$$

The radial distribution function is an important measure because several key thermodynamic properties, such as potential energy and pressure, can be calculated from it. For a 3-D system where particles interact via pairwise potentials, the potential energy of the system can be calculated as follows: [6]

$$E_{\text{pot}} = 2\pi N \rho \int_0^\infty dr \, r^2 u(r) g(r)$$

where $N$ is the number of particles in the system, $\rho$ is the number density, and $u(r)$ is the pair potential. The pressure of the system can also be calculated by relating the second virial coefficient to $g(r)$. The pressure can be calculated as follows: [6]

$$p = \rho k T - \frac{2\pi}{3} \rho^2 \int_0^\infty dr \, \frac{du(r)}{dr} r^3 g(r)$$

Note that the results for potential energy and pressure will not be as accurate as a direct calculation of these properties, because of the averaging involved in the calculation of $g(r)$.

For dilute systems (e.g. gases), the correlations in the positions of the particles that $g(r)$ accounts for are due only to the potential $u(r)$ engendered by the reference particle, neglecting indirect effects. In the first approximation, it is thus simply given by the Boltzmann distribution law:

$$g(r) = e^{-\beta u(r)}$$

If $u(r)$ were zero for all $r$, i.e., if the particles did not exert any influence on each other, then $g(r) = 1$ for all $r$, and the mean local density would be equal to the mean density $\rho$: the presence of a particle at O would not influence the particle distribution around it, and the gas would be ideal. For distances $r$ such that $u(r)$ is significant, the mean local density will differ from the mean density $\rho$, depending on the sign of $u(r)$ (higher for negative interaction energy and lower for positive $u(r)$).

As the density of the gas increases, the low-density limit becomes less and less accurate, since a particle situated at $\mathbf{r}$ experiences not only the interaction with the particle at O but also with the other neighbours, which are themselves influenced by the reference particle. This mediated interaction increases with the density, since there are more neighbours to interact with: it makes physical sense to write a density expansion of $g(r)$, which resembles the virial equation:

$$g(r) = e^{-\beta u(r)} \, y(r), \qquad y(r) = 1 + \sum_{n=1}^\infty \rho^n y_n(r) \qquad (12)$$

This similarity is not accidental; indeed, substituting (12) in the relations above for the thermodynamic parameters (Equations 7, 9 and 10) yields the corresponding virial expansions. [7] The auxiliary function $y(r)$ is known as the cavity distribution function. [5]: Table 4.1 It has been shown that for classical fluids at a fixed density and a fixed positive temperature, the effective pair potential that generates a given $g(r)$ under equilibrium is unique up to an additive constant, if it exists. [8]

In recent years, some attention has been given to developing pair correlation functions for spatially discrete data such as lattices or networks. [9]
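The energy and pressure integrals above can likewise be evaluated from tabulated data. The sketch below uses a Lennard-Jones pair potential in reduced units and the dilute-limit g(r) = exp(-u(r)/kT) from the text as assumed inputs:

```python
import numpy as np

def energy_pressure(r, g, rho, kT, u, du_dr):
    """Excess energy per particle and pressure from g(r):
    E/N = 2*pi*rho * int r^2 u(r) g(r) dr
    p   = rho*kT - (2*pi/3)*rho^2 * int r^3 u'(r) g(r) dr"""
    e_per_particle = 2.0 * np.pi * rho * np.trapz(r**2 * u(r) * g, r)
    p = rho * kT - (2.0 * np.pi / 3.0) * rho**2 * np.trapz(r**3 * du_dr(r) * g, r)
    return e_per_particle, p

u = lambda r: 4.0 * (r**-12 - r**-6)                 # Lennard-Jones, eps=sigma=1
du = lambda r: 4.0 * (-12.0 * r**-13 + 6.0 * r**-7)  # its derivative
r = np.linspace(0.8, 8.0, 2000)
kT, rho = 1.5, 0.05
g = np.exp(-u(r) / kT)                               # dilute-limit approximation
e, p = energy_pressure(r, g, rho, kT, u, du)
print(f"E/N = {e:.4f}, p = {p:.4f} (reduced units)")
```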
[ 9 ] One can determine g ( r ) {\displaystyle g(r)} indirectly (via its relation with the structure factor S ( q ) {\displaystyle S(q)} ) using neutron scattering or x-ray scattering data. The technique can be used at very short length scales (down to the atomic level [ 10 ] ) but involves significant space and time averaging (over the sample size and the acquisition time, respectively). In this way, the radial distribution function has been determined for a wide variety of systems, ranging from liquid metals [ 11 ] to charged colloids. [ 12 ] Going from the experimental S ( q ) {\displaystyle S(q)} to g ( r ) {\displaystyle g(r)} is not straightforward and the analysis can be quite involved. [ 13 ] It is also possible to calculate g ( r ) {\displaystyle g(r)} directly by extracting particle positions from traditional or confocal microscopy. [ 14 ] This technique is limited to particles large enough for optical detection (in the micrometer range), but it has the advantage of being time-resolved so that, aside from the statical information, it also gives access to dynamical parameters (e.g. diffusion constants [ 15 ] ) and also space-resolved (to the level of the individual particle), allowing it to reveal the morphology and dynamics of local structures in colloidal crystals, [ 16 ] glasses, [ 17 ] [ 18 ] gels, [ 19 ] [ 20 ] and hydrodynamic interactions. [ 21 ] Direct visualization of a full (distance-dependent and angle-dependent) pair correlation function was achieved by a scanning tunneling microscopy in the case of 2D molecular gases. [ 22 ] It has been noted that radial distribution functions alone are insufficient to characterize structural information. Distinct point processes may possess identical or practically indistinguishable radial distribution functions, known as the degeneracy problem. [ 23 ] [ 24 ] In such cases, higher order correlation functions are needed to further describe the structure. Higher-order distribution functions g ( k ) {\displaystyle \textstyle g^{(k)}} with k > 2 {\displaystyle \textstyle k>2} were less studied, since they are generally less important for the thermodynamics of the system; at the same time, they are not accessible by conventional scattering techniques. They can however be measured by coherent X-ray scattering and are interesting insofar as they can reveal local symmetries in disordered systems. [ 25 ]
https://en.wikipedia.org/wiki/Radial_distribution_function
Radial dysplasia , also known as radial club hand or radial longitudinal deficiency , is a congenital difference occurring in a longitudinal direction, resulting in radial deviation of the wrist and shortening of the forearm. It can occur in different ways, from a minor anomaly to complete absence of the radius , the radial side of the carpal bones and the thumb. [ 1 ] Hypoplasia of the distal humerus may be present as well and can lead to stiffness of the elbow. [ 2 ] Radial deviation of the wrist is caused by lack of support to the carpus; it may be reinforced if the forearm muscles are functioning poorly or have abnormal insertions. [ 3 ] Although radial longitudinal deficiency is often bilateral, the extent of involvement is most often asymmetric. [ 1 ] The incidence is between 1:30,000 and 1:100,000, and it is more often a sporadic mutation than an inherited condition. [ 1 ] [ 3 ] It is one of the possible co-occurring birth defects of the embryonic mesoderm within VACTERL association . In case of an inherited condition, several syndromes are known to be associated with radial dysplasia, such as the cardiovascular Holt–Oram syndrome and the hematologic Fanconi anemia and TAR syndrome . [ 1 ] Other possible causes are an injury to the apical ectodermal ridge during upper limb development, [ 2 ] intrauterine compression, or maternal drug use ( thalidomide ). [ 3 ] Classification of radial dysplasia is practised through different models. Some only include the different deformities or absences of the radius, while others also include anomalies of the thumb and carpal bones. The Bayne and Klug classification discriminates four different types of radial dysplasia. [ 4 ] A fifth type was added by Goldfarb et al., describing a radial dysplasia with participation of the humerus. [ 4 ] In this classification only anomalies of the radius and the humerus are taken into consideration. James and colleagues expanded this classification by including deficiencies of the carpal bones with a normal distal radius length as type 0 and isolated thumb anomalies as type N: [ 4 ]

Type N: Isolated thumb anomaly
Type 0: Deficiency of the carpal bones
Type I: Short distal radius
Type II: Hypoplastic radius in miniature
Type III: Absent distal radius
Type IV: Completely absent radius
Type V: Completely absent radius and manifestations in the proximal humerus

The term absent radius can refer to the last three types. In cases of a minor deviation of the wrist, treatment by splinting and stretching alone may be a sufficient approach to treating the radial deviation in radial dysplasia. Besides that, the parents can support this treatment by performing passive exercises of the hand. This will help to stretch the wrist and possibly also correct any extension contracture of the elbow. Furthermore, splinting is used as a postoperative measure to try to avoid a relapse of the radial deviation. [ 3 ] More severe types (Bayne types III and IV) of radial dysplasia can be treated with surgical intervention. The main goal of centralization is to increase hand function by positioning the hand over the distal ulna and stabilizing the wrist in a straight position. Splinting or soft-tissue distraction may be used preceding the centralization. In classic centralization, central portions of the carpus are removed to create a notch for placement of the ulna. [ 5 ] A different approach is to place the metacarpal of the middle finger in line with the ulna with a fixation pin.
[ 1 ] [ 3 ] If radial tissues are still too short after soft-tissue stretching, soft-tissue release and different approaches for manipulation of the forearm bones may be used to enable the placement of the hand onto the ulna. Possible approaches are shortening of the ulna by resection of a segment, or removing carpal bones. [ 6 ] If the ulna is significantly bent, an osteotomy may be needed to straighten the ulna. [ 1 ] After placing the wrist in the correct position, the radial wrist extensors are transferred to the extensor carpi ulnaris tendon to help stabilize the wrist in a straight position. [ 2 ] If the thumb or its carpometacarpal joint is absent, centralization can be followed by pollicization . Postoperatively, a long-arm plaster splint has to be worn for at least 6 to 8 weeks. A removable splint is often worn for a long period of time. [ 3 ] Radial angulation of the hand enables patients with stiff elbows to reach their mouth for feeding; therefore, treatment is contraindicated in cases of extension contracture of the elbow. [ 2 ] [ 3 ] A risk of centralization is that the procedure may cause injury to the ulnar physis, leading to early epiphyseal arrest of the ulna and thereby resulting in an even shorter forearm. [ 1 ] [ 3 ] Sestero et al. reported that ulnar growth after centralization reaches from 48% to 58% of normal ulnar length, while ulnar growth in untreated patients reaches 64% of normal ulnar length. [ 7 ] Several reviews note that centralization can only partially correct radial deviation of the wrist and that studies with long-term follow-up show relapse of radial deviation. [ 6 ] [ 8 ] Buck-Gramcko described another operative technique for the treatment of radial dysplasia, called radialization. During radialization the metacarpal of the index finger is pinned onto the ulna and the radial wrist extensors are attached to the ulnar side of the wrist, causing overcorrection or ulnar deviation. This overcorrection is believed to make relapse of radial deviation less likely. [ 1 ] Vilkki reported a different approach, in which a vascularised MTP joint of the second toe is transferred to the radial side of the ulna, creating a platform that provides radial support for the wrist. The graft is vascularised and therefore maintains its ability to keep pace with the growth of the supporting ulna. [ 6 ] Prior to the actual transfer of the MTP joint of the second toe, soft-tissue distraction of the wrist is required to create enough space to place the MTP joint. When enough space has been created through distraction after several weeks, the transfer of the MTP joint can be initiated. During this surgical intervention the wrist and the second toe are prepared for transfer at the same time. The ipsilateral second-toe MTP joint, together with its metatarsal arteries, its extensor and flexor tendons and its dorsal nerves to the skin, is harvested for transfer. The distal and middle phalanx of the toe are removed. The transferred toe, consisting of the metatarsal and proximal phalanx, is fixed between the physis of the ulna and the second metacarpal, or the scaphoid. The tendons of the toe are attached to those of the radial flexor and extensor muscles of the wrist to give the MTP joint more stability. K-wires are placed to fix the bones in the desired position. Once the bones are secured, anastomoses are made between the vessels of the toe and the vessels of the forearm. After revascularization of the toe, the skin paddle is placed and the skin is closed.
[ 9 ] Vilkki et al. conducted a study of 19 forearms treated with vascularized MTP-joint transfer, with a mean follow-up of 11 years, which reported an ulnar length of 67% compared to the contralateral side. [ 9 ] In a review, de Jong et al. noted that, compared with published outcomes of centralization, Vilkki reported a smaller postoperative deviation and a less severe relapse. [ 6 ]
https://en.wikipedia.org/wiki/Radial_dysplasia
Radial immunodiffusion (RID), Mancini immunodiffusion or single radial immunodiffusion assay, is an older immunodiffusion technique used in immunology to determine the quantity or concentration of an antigen in a sample. [ 1 ] A solution containing antibody is added to a heated medium such as agar or agarose dissolved in buffered normal saline . The molten medium is then poured onto a microscope slide or into an open container, such as a Petri dish , and allowed to cool and form a gel . A solution containing the antigen is then placed in a well that is punched into the gel. The slide or container is then covered, closed or placed in a humidity box to prevent evaporation. [ 2 ] [ 3 ] [ 4 ] [ 5 ] The antigen diffuses radially into the medium, forming a circle of precipitin that marks the boundary between the antibody and the antigen. [ 2 ] [ 3 ] The diameter of the circle increases with time as the antigen diffuses into the medium, reacts with the antibody, and forms insoluble precipitin complexes . [ 2 ] [ 3 ] [ 6 ] The antigen is quantitated by measuring the diameter of the precipitin circle and comparing it with the diameters of precipitin circles formed by known quantities or concentrations of the antigen. [ 2 ] [ 3 ] [ 4 ] [ 7 ] Antigen-antibody complexes are small and soluble when in antigen excess. Therefore, precipitation near the center of the circle is usually less dense than it is near the circle's outer edge, where antigen is less concentrated. [ 2 ] [ 3 ] Expansion of the circle reaches an endpoint and stops when free antigen is depleted and when antigen and antibody reach equivalence. [ 2 ] [ 3 ] [ 6 ] However, the clarity and density of the circle's outer edge may continue to increase after the circle stops expanding. [ 2 ] For most antigens, the area and the square of the diameter of the circle at the circle's endpoint are directly proportional to the initial quantity of antigen and are inversely proportional to the concentration of antibody. [ 2 ] [ 3 ] [ 6 ] Therefore, a best-fit line on a graph that compares the quantities or concentrations of antigen in the original samples with the areas or the squares of the diameters of the precipitin circles will usually be straight after all circles have reached their endpoints (equivalence method). [ 2 ] [ 4 ] [ 6 ] [ 7 ] Circles created by small quantities of antigen reach their endpoints before circles created by large quantities do. [ 2 ] [ 3 ] [ 6 ] Therefore, if areas or diameters of circles are measured while some, but not all, circles have stopped expanding, such a graph will be straight in the portion corresponding to wells that initially contained the smaller quantities or concentrations of antigen, and curved in the portion corresponding to wells that contained the larger quantities or concentrations. [ 2 ] [ 6 ] While circles are still expanding, a graph that compares the initial quantities or concentrations of the antigen on a logarithmic scale with the diameters or areas of the circles on a linear scale may be a straight line (kinetic method). [ 2 ] [ 3 ] [ 5 ] [ 6 ] [ 7 ] [ 10 ] However, circles of the precipitate are smaller and less distinct during expansion than they are after expansion has ended. [ 2 ] [ 6 ] Further, temperature affects the rate of expansion, but does not affect the size of a circle at its endpoint.
[ 2 ] In addition, the range of circle diameters for the same initial quantities or concentrations of antigen is smaller while some circles are enlarging than it is after all circles have reached their endpoints. [ 2 ] [ 6 ] The quantity and concentration of insoluble antigen-antibody complexes at the outer edge of the circle increase with time. [ 2 ] The clarity and density of the circle's outer edge therefore also increase with time. [ 2 ] As a result, measurements of the sizes of circles, and graphs produced from these measurements, are often more accurate after circles have stopped expanding than they are while circles are still enlarging. [ 2 ] For those reasons, it is often more desirable to take measurements after all circles have reached their endpoints than it is to take measurements while some or all circles are still enlarging. [ 2 ] Measurements of large circles are more accurate than those of small circles. [ 2 ] [ 11 ] It is therefore often desirable to adjust the concentration of antibody and the initial quantities of antigen to assure that precipitin rings will be large. [ 2 ] One can determine the antigen concentration in a sample whose concentration is unknown by finding its location on a graph that charts the diameters of precipitin circles produced by three or more reference samples with known antigen concentrations. Two techniques often produce straight lines on such graphs, on different types of graph: the equivalence method, in which the areas or squared diameters of the circles at their endpoints are plotted against antigen quantity or concentration on linear scales; and the kinetic method, in which the diameters or areas of still-expanding circles are plotted on a linear scale against antigen quantity or concentration on a logarithmic scale.
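For the equivalence method, quantitation reduces to a straight-line calibration of squared ring diameter against antigen concentration, as described above. A minimal Python sketch with made-up calibration numbers (values and units are purely illustrative):

```python
import numpy as np

# Hypothetical calibration: known antigen concentrations (mg/dL) and the
# endpoint diameters (mm) of their precipitin circles.
conc = np.array([25.0, 50.0, 100.0, 200.0])
diam = np.array([4.1, 5.2, 6.9, 9.3])

# Equivalence method: diameter^2 is linear in concentration at the endpoint.
slope, intercept = np.polyfit(conc, diam**2, 1)

def concentration(d_unknown):
    """Read an unknown sample's concentration off the calibration line."""
    return (d_unknown**2 - intercept) / slope

print(f"Sample with a 6.0 mm ring ~= {concentration(6.0):.0f} mg/dL")
```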
https://en.wikipedia.org/wiki/Radial_immunodiffusion
In mathematics , a subset A ⊆ X {\displaystyle A\subseteq X} of a linear space X {\displaystyle X} is radial at a given point a 0 ∈ A {\displaystyle a_{0}\in A} if for every x ∈ X {\displaystyle x\in X} there exists a real t x > 0 {\displaystyle t_{x}>0} such that for every t ∈ [ 0 , t x ] , {\displaystyle t\in [0,t_{x}],} a 0 + t x ∈ A . {\displaystyle a_{0}+tx\in A.} [ 1 ] Geometrically, this means A {\displaystyle A} is radial at a 0 {\displaystyle a_{0}} if for every x ∈ X , {\displaystyle x\in X,} there is some (non-degenerate) line segment (depending on x {\displaystyle x} ) emanating from a 0 {\displaystyle a_{0}} in the direction of x {\displaystyle x} that lies entirely in A . {\displaystyle A.} Every radial set is a star domain , although not conversely. The points at which a set is radial are called internal points . [ 2 ] [ 3 ] The set of all points at which A ⊆ X {\displaystyle A\subseteq X} is radial is equal to the algebraic interior . [ 1 ] [ 4 ] Every absorbing subset is radial at the origin a 0 = 0 , {\displaystyle a_{0}=0,} and if the vector space is real then the converse also holds. That is, a subset of a real vector space is absorbing if and only if it is radial at the origin. Some authors use the term radial as a synonym for absorbing . [ 5 ]
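As a concrete illustration, not taken from the article, the definition can be checked for the open unit ball of a normed space; the particular witness t x below is just one convenient choice:

```latex
% Example (illustrative): A = the open unit ball of a normed space X
% is radial at a_0 = 0.
\[
  A=\{\,y\in X:\|y\|<1\,\}.
\]
% For x = 0 any t_x > 0 works. For x \neq 0, choose t_x = 1/(2\|x\|); then
\[
  \|a_0+tx\| = t\|x\| \;\le\; t_x\|x\| \;=\; \tfrac12 \;<\; 1
  \qquad\text{for all } t\in[0,t_x],
\]
% so a_0 + tx \in A. Hence A is radial (indeed absorbing) at the origin,
% consistent with the equivalence stated above for real vector spaces.
```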
https://en.wikipedia.org/wiki/Radial_set
The radial spoke is a multi-unit protein structure found in the axonemes of eukaryotic cilia and flagella . [ 1 ] Although experiments have determined the importance of the radial spoke in the proper function of these organelles , its structure and mode of action remain poorly understood. Radial spokes are T-shaped structures present inside the axoneme. Each spoke consists of a "head" and a "stalk," and each of these sub-structures is itself made up of many protein subunits. [ 2 ] In all, the radial spoke is known to contain at least seventeen proteins, [ 3 ] five in the head and twelve in the stalk. The spoke stalk binds to the A-tubule of each microtubule outer doublet, and the spoke head faces in towards the center of the axoneme. The radial spoke is known to play a role in the mechanical movement of the flagellum/cilium. For example, mutant organisms lacking properly functioning radial spokes have flagella and cilia that are immotile. Radial spokes also influence the cilium "waveform"; that is, the exact bending pattern the cilium repeats. How the radial spoke carries out this function is poorly understood. Radial spokes are believed to interact with both the central pair microtubules and the dynein arms, perhaps in a way that maintains the rhythmic activation of the dynein motors . For example, one of the radial spoke subunits, RSP3, is an anchor protein predicted to hold another protein called protein kinase A (PKA). PKA would theoretically then be able to activate/inactivate the adjacent dynein arms via its kinase activity. Human axonemal radial spoke subunits: RSPH1 , RSPH3 , RSPH4A , RSPH6A , RSPH9 , RSPH10B , RSPH10B2 , RSPH14 . [ 4 ]
https://en.wikipedia.org/wiki/Radial_spoke
The radial velocity or line-of-sight velocity of a target with respect to an observer is the rate of change of the vector displacement between the two points. It is formulated as the vector projection of the target-observer relative velocity onto the relative direction or line-of-sight (LOS) connecting the two points. The radial speed or range rate is the temporal rate of the distance or range between the two points. It is a signed scalar quantity , formulated as the scalar projection of the relative velocity vector onto the LOS direction. Equivalently, radial speed equals the norm of the radial velocity, modulo the sign. [ a ] In astronomy, the observer is usually taken to be on Earth, so the radial velocity then denotes the speed with which the object moves away from the Earth (or approaches it, for a negative radial velocity). Given a differentiable vector r ∈ R 3 {\displaystyle \mathbf {r} \in \mathbb {R} ^{3}} defining the instantaneous relative position of a target with respect to an observer, let the instantaneous relative velocity of the target with respect to the observer be

{\displaystyle \mathbf {v} ={\frac {\mathrm {d} \mathbf {r} }{\mathrm {d} t}}\qquad (1)}

The magnitude of the position vector r {\displaystyle \mathbf {r} } is defined as

{\displaystyle r=\|\mathbf {r} \|=\langle \mathbf {r} ,\mathbf {r} \rangle ^{1/2}\qquad (2)}

in terms of the inner product. The quantity range rate is the time derivative of the magnitude ( norm ) of r {\displaystyle \mathbf {r} } , expressed as

{\displaystyle {\dot {r}}={\frac {\mathrm {d} r}{\mathrm {d} t}}\qquad (3)}

Substituting ( 2 ) into ( 3 ):

{\displaystyle {\dot {r}}={\frac {\mathrm {d} }{\mathrm {d} t}}\langle \mathbf {r} ,\mathbf {r} \rangle ^{1/2}}

Evaluating the derivative of the right-hand side by the chain rule using ( 1 ), the expression becomes

{\displaystyle {\dot {r}}={\frac {\langle \mathbf {r} ,\mathbf {v} \rangle +\langle \mathbf {v} ,\mathbf {r} \rangle }{2\langle \mathbf {r} ,\mathbf {r} \rangle ^{1/2}}}}

By reciprocity, [ 1 ] ⟨ v , r ⟩ = ⟨ r , v ⟩ {\displaystyle \langle \mathbf {v} ,\mathbf {r} \rangle =\langle \mathbf {r} ,\mathbf {v} \rangle } , so

{\displaystyle {\dot {r}}={\frac {\langle \mathbf {r} ,\mathbf {v} \rangle }{r}}}

Defining the unit relative position vector r ^ = r / r {\displaystyle {\hat {r}}=\mathbf {r} /{r}} (or LOS direction), the range rate is simply expressed as

{\displaystyle {\dot {r}}=\langle {\hat {r}},\mathbf {v} \rangle }

i.e., the projection of the relative velocity vector onto the LOS direction. Further defining the velocity direction v ^ = v / v {\displaystyle {\hat {v}}=\mathbf {v} /{v}} , with the relative speed v = ‖ v ‖ {\displaystyle v=\|\mathbf {v} \|} , we have:

{\displaystyle {\dot {r}}=v\langle {\hat {r}},{\hat {v}}\rangle }

where the inner product is either +1 or −1, for parallel and antiparallel vectors , respectively. A singularity exists for a coincident observer and target, i.e., r = 0 {\displaystyle r=0} ; in this case, range rate is undefined. In astronomy, radial velocity is often measured to the first order of approximation by Doppler spectroscopy . The quantity obtained by this method may be called the barycentric radial-velocity measure or spectroscopic radial velocity. [ 2 ] However, due to relativistic and cosmological effects over the great distances that light typically travels to reach the observer from an astronomical object, this measure cannot be accurately transformed to a geometric radial velocity without additional assumptions about the object and the space between it and the observer. [ 3 ] By contrast, astrometric radial velocity is determined by astrometric observations (for example, a secular change in the annual parallax ). [ 3 ] [ 4 ] [ 5 ] Light from an object with a substantial relative radial velocity at emission will be subject to the Doppler effect , so the frequency of the light decreases for objects that were receding ( redshift ) and increases for objects that were approaching ( blueshift ). The radial velocity of a star or other luminous distant objects can be measured accurately by taking a high-resolution spectrum and comparing the measured wavelengths of known spectral lines to wavelengths from laboratory measurements.
A positive radial velocity indicates the distance between the objects is or was increasing; a negative radial velocity indicates the distance between the source and observer is or was decreasing. William Huggins ventured in 1868 to estimate the radial velocity of Sirius with respect to the Sun, based on observed redshift of the star's light. [ 6 ] In many binary stars , the orbital motion usually causes radial velocity variations of several kilometres per second (km/s). As the spectra of these stars vary due to the Doppler effect, they are called spectroscopic binaries . Radial velocity can be used to estimate the ratio of the masses of the stars, and some orbital elements , such as eccentricity and semimajor axis . The same method has also been used to detect planets around stars: measurement of the star's movement determines the planet's orbital period, while the resulting radial-velocity amplitude allows the calculation of a lower bound on the planet's mass using the binary mass function . Radial velocity methods alone may only reveal a lower bound, since a large planet orbiting at a very high angle to the line of sight will perturb its star radially as much as a much smaller planet with an orbital plane on the line of sight. It has been suggested that planets with high eccentricities calculated by this method may in fact be two-planet systems of circular or near-circular resonant orbit. [ 7 ] [ 8 ] The radial velocity method to detect exoplanets is based on the detection of variations in the velocity of the central star, due to the changing direction of the gravitational pull from an (unseen) exoplanet as it orbits the star. When the star moves towards us, its spectrum is blueshifted, while it is redshifted when it moves away from us. By regularly looking at the spectrum of a star—and so, measuring its velocity—it can be determined if it moves periodically due to the influence of an exoplanet companion. From the instrumental perspective, velocities are measured relative to the telescope's motion. So an important first step of the data reduction is to remove the contributions of
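Numerically, the range-rate projection derived above is a one-line computation. A minimal Python sketch with made-up relative position and velocity vectors (any consistent units will do):

```python
import numpy as np

# Hypothetical relative position (km) and velocity (km/s) of target w.r.t. observer.
r = np.array([4000.0, 3000.0, 0.0])
v = np.array([-5.0, 2.0, 1.0])

r_hat = r / np.linalg.norm(r)           # line-of-sight (LOS) direction

range_rate = np.dot(r_hat, v)           # signed radial speed, <r_hat, v>
radial_velocity = range_rate * r_hat    # radial velocity vector (projection onto LOS)

print(f"range rate = {range_rate:+.3f} km/s (negative = approaching)")
```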
https://en.wikipedia.org/wiki/Radial_velocity
In radiometry , radiance is the radiant flux emitted, reflected, transmitted or received by a given surface, per unit solid angle per unit projected area. Radiance is used to characterize diffuse emission and reflection of electromagnetic radiation , and to quantify emission of neutrinos and other particles. The SI unit of radiance is the watt per steradian per square metre ( W·sr −1 ·m −2 ). It is a directional quantity: the radiance of a surface depends on the direction from which it is being observed. The related quantity spectral radiance is the radiance of a surface per unit frequency or wavelength , depending on whether the spectrum is taken as a function of frequency or of wavelength. Historically, radiance was called "intensity" and spectral radiance was called "specific intensity". Many fields still use this nomenclature. It is especially dominant in heat transfer , astrophysics and astronomy . "Intensity" has many other meanings in physics , with the most common being power per unit area (so the radiance is the intensity per solid angle in this case). Radiance is useful because it indicates how much of the power emitted, reflected, transmitted or received by a surface will be received by an optical system looking at that surface from a specified angle of view. In this case, the solid angle of interest is the solid angle subtended by the optical system's entrance pupil . Since the eye is an optical system, radiance and its cousin luminance are good indicators of how bright an object will appear. For this reason, radiance and luminance are both sometimes called "brightness". This usage is now discouraged (see the article Brightness for a discussion). The nonstandard usage of "brightness" for "radiance" persists in some fields, notably laser physics . The radiance divided by the index of refraction squared is invariant in geometric optics . This means that for an ideal optical system in air, the radiance at the output is the same as the input radiance. This is sometimes called conservation of radiance . For real, passive, optical systems, the output radiance is at most equal to the input, unless the index of refraction changes. As an example, if a lens forms a demagnified image, the optical power is concentrated into a smaller area, so the irradiance is higher at the image. The light at the image plane, however, fills a larger solid angle, so the radiance comes out to be the same assuming there is no loss at the lens. Spectral radiance expresses radiance as a function of frequency or wavelength. Radiance is the integral of the spectral radiance over all frequencies or wavelengths. For radiation emitted by the surface of an ideal black body at a given temperature, spectral radiance is governed by Planck's law , while the integral of its radiance, over the hemisphere into which its surface radiates, is given by the Stefan–Boltzmann law . Its surface is Lambertian , so that its radiance is uniform with respect to angle of view, and is simply the Stefan–Boltzmann integral divided by π. This factor is obtained from the solid angle 2π steradians of a hemisphere decreased by integration over the cosine of the zenith angle . Radiance of a surface , denoted L e,Ω ("e" for "energetic", to avoid confusion with photometric quantities, and "Ω" to indicate this is a directional quantity), is defined as [ 1 ]

{\displaystyle L_{\mathrm {e} ,\Omega }={\frac {\partial ^{2}\Phi _{\mathrm {e} }}{\partial \Omega \,\partial (A\cos \theta )}},}

where Φ e is the radiant flux emitted, reflected, transmitted or received, Ω is the solid angle, A cos θ is the projected area, and θ is the angle between the surface normal and the specified direction. In general L e,Ω is a function of viewing direction, depending on θ through cos θ and azimuth angle through ∂Φ e /∂Ω .
For the special case of a Lambertian surface , ∂ 2 Φ e /(∂Ω ∂ A ) is proportional to cos θ , and L e,Ω is isotropic (independent of viewing direction). When calculating the radiance emitted by a source, A refers to an area on the surface of the source, and Ω to the solid angle into which the light is emitted. When calculating radiance received by a detector, A refers to an area on the surface of the detector and Ω to the solid angle subtended by the source as viewed from that detector. When radiance is conserved, as discussed above, the radiance emitted by a source is the same as that received by a detector observing it. Spectral radiance in frequency of a surface , denoted L e,Ω,ν , is defined as [ 1 ]

{\displaystyle L_{\mathrm {e} ,\Omega ,\nu }={\frac {\partial L_{\mathrm {e} ,\Omega }}{\partial \nu }},}

where ν is the frequency. Spectral radiance in wavelength of a surface , denoted L e,Ω,λ , is defined as [ 1 ]

{\displaystyle L_{\mathrm {e} ,\Omega ,\lambda }={\frac {\partial L_{\mathrm {e} ,\Omega }}{\partial \lambda }},}

where λ is the wavelength. Radiance of a surface is related to étendue by

{\displaystyle L_{\mathrm {e} }=n^{2}{\frac {\partial \Phi _{\mathrm {e} }}{\partial G}},}

where n is the refractive index of the medium in which the surface is immersed, and G is the étendue of the light beam. As the light travels through an ideal optical system, both the étendue and the radiant flux are conserved. Therefore, basic radiance , defined by [ 2 ]

{\displaystyle L_{\mathrm {e} ,*}={\frac {L_{\mathrm {e} }}{n^{2}}},}

is also conserved. In real systems, the étendue may increase (for example due to scattering) or the radiant flux may decrease (for example due to absorption) and, therefore, basic radiance may decrease. However, étendue may not decrease and radiant flux may not increase and, therefore, basic radiance may not increase.
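One consequence of these definitions is the factor of π mentioned earlier for black bodies: integrating L cos θ over the hemisphere for a Lambertian surface gives an exitance of πL. A small Python check, with an arbitrarily assumed radiance value:

```python
import numpy as np
from scipy import integrate

L = 100.0   # radiance of a Lambertian surface, W sr^-1 m^-2 (assumed value)

# Exitance M = Int L cos(theta) dOmega over the hemisphere, with
# dOmega = sin(theta) dtheta dphi; for a Lambertian surface this is pi*L.
integrand = lambda theta, phi: L * np.cos(theta) * np.sin(theta)
M, _ = integrate.dblquad(integrand, 0.0, 2.0 * np.pi, 0.0, np.pi / 2.0)

print(M, np.pi * L)   # both ~ 314.159
```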
https://en.wikipedia.org/wiki/Radiance
Radiant heating and cooling is a category of HVAC technologies that exchange heat by both convection and radiation with the environments they are designed to heat or cool. There are many subcategories of radiant heating and cooling, including: "radiant ceiling panels", [ 1 ] "embedded surface systems", [ 1 ] "thermally active building systems", [ 1 ] and infrared heaters . According to some definitions, a technology is only included in this category if radiation comprises more than 50% of its heat exchange with the environment; [ 2 ] therefore technologies such as radiators and chilled beams (which may also involve radiation heat transfer) are usually not considered radiant heating or cooling. Within this category, it is practical to distinguish between high-temperature radiant heating (devices with an emitting source temperature greater than about 300 °F) and radiant heating or cooling with more moderate source temperatures. This article mainly addresses radiant heating and cooling with moderate source temperatures, used to heat or cool indoor environments. Moderate-temperature radiant heating and cooling is usually composed of relatively large surfaces that are internally heated or cooled using hydronic or electrical sources. For high-temperature indoor or outdoor radiant heating, see: Infrared heater . For snow melt applications see: Snowmelt system . Radiant heating and cooling originated as separate systems but now share a similar form. Radiant heating has a long history in Asia and Europe. The earliest systems, from as early as 5000 BC, were found in northern China and Korea. Archaeological findings show kang and dikang, heated beds and floors in ancient Chinese homes. The term kang originated in the 11th century BC, meaning "to dry", and later came to denote a heated bed, while dikang extended this concept to a heated floor. In Korea, the ondol system, meaning "warm stone," used flues beneath the floor to channel smoke from a kitchen stove, heating flat stones that radiated heat into the room above. Over time, the ondol system adapted to use coal and later transitioned to water-based systems in the 20th century, remaining a common heating system in Korean buildings. [ 3 ] In Europe, the Roman hypocaust system, developed around the 3rd century BC, was an early radiant heating method using a furnace connected to underfloor and wall flues to circulate hot air in public baths and villas . This technology spread across the Roman Empire but declined after its fall, replaced by simpler fireplaces in the Middle Ages. In this period, systems like the Kachelofen from Austria and Germany used thermal masses for efficient heat storage and distribution. During the 18th century, radiant heating gained renewed use in Europe, driven by advancements in thermal storage techniques, such as heated flues for efficient heat distribution, and a better understanding of how materials retain and transfer heat. In the early 19th century, developments in water-based systems with embedded hot water pipes paved the way for modern radiant heating, providing indoor comfort through heat transfer. [ 4 ] Radiant cooling also has ancient roots. In the 8th century, Mesopotamian builders used snow-packed walls to cool indoor space. The concept resurfaced in the 20th century with hydronic cooling systems in Europe, which embedded cool water pipes in structures to absorb and dissipate heat and so meet cooling loads. [ 4 ] [ 5 ] Radiant cooling became more widely adopted in the 1990s, with the implementation of floor cooling.
[ 6 ] Today, modern radiant systems typically use water as a thermal medium for efficient heat transfer and are widely adopted in residential, commercial, and industrial buildings. While valued for their potential to enhance energy efficiency, quiet operation, and thermal comfort , [ 7 ] their performance varies with design and application, leading to ongoing discussions. [ 8 ] Radiant heating is a technology for heating indoor and outdoor areas. Heating by radiant energy is observed every day, the warmth of the sunshine being the most commonly observed example. Radiant heating as a technology is more narrowly defined: it is the method of intentionally using the principles of radiant heat to transfer radiant energy from an emitting heat source to an object. Designs with radiant heating are seen as replacements for conventional convection heating, as well as a way of supplying confined outdoor heating. The heat energy is emitted from a warm element, such as a floor, wall or overhead panel, and warms people and other objects in rooms rather than directly heating the air. The internal air temperature for radiant-heated buildings may be lower than for a conventionally heated building to achieve the same level of body comfort, when adjusted so the perceived temperature is actually the same. One of the key advantages of radiant heating systems is a much decreased circulation of air inside the room, and a corresponding reduction in the spreading of airborne particles. Radiant heating systems can be divided into several subtypes. Underfloor and wall heating systems are often called low-temperature systems. Since their heating surface is much larger than that of other systems, a much lower temperature is required to achieve the same level of heat transfer . This provides an improved room climate with healthier humidity levels. The lower temperatures and large surface area of underfloor heating systems make them ideal heat emitters for air source heat pumps , evenly and effectively radiating the heat energy from the system into rooms within a home. The maximum temperature of the heating surface can vary from 29–35 °C (84–95 °F) depending on the room type. Radiant overhead panels are mostly used in production and warehousing facilities or sports centers; they hang a few meters above the floor and their surface temperatures are much higher. In the case of heating outdoor areas, the surrounding air is constantly moving. Relying on convection heating is in most cases impractical, because once the outside air is heated, it is carried away by air movement; even in a no-wind condition, buoyancy effects will carry away the hot air. Outdoor radiant heaters allow specific spaces within an outdoor area to be targeted, warming only the people and objects in their path. Radiant heating systems may be gas-fired or use electric infrared heating elements. An example of the overhead radiant heaters are the patio heaters often used with outdoor serving. The top metal disc reflects the radiant heat onto a small area. Radiant cooling is the use of cooled surfaces to remove sensible heat primarily by thermal radiation and only secondarily by other methods like convection . Radiant systems that use water to cool the radiant surfaces are examples of hydronic systems. Unlike "all-air" air conditioning systems that circulate cooled air only, hydronic radiant systems circulate cooled water in pipes through specially mounted panels on a building's floor or ceiling to provide comfortable temperatures.
There is a separate system to provide air for ventilation , dehumidification , and potentially additional cooling. [ 9 ] Radiant systems are less common than all-air systems for cooling, but can have advantages compared to all-air systems in some applications. [ 10 ] [ 11 ] [ 12 ] Since the majority of the cooling process results from removing sensible heat through radiant exchange with people and objects, and not air, occupant thermal comfort can be achieved with warmer interior air temperatures than with air-based cooling systems. Radiant cooling systems potentially offer reductions in cooling energy consumption. [ 10 ] The latent loads (humidity) from occupants, infiltration and processes generally need to be managed by an independent system. Radiant cooling may also be integrated with other energy-efficient strategies such as night-time flushing, indirect evaporative cooling , or ground source heat pumps , as it requires only a small difference in temperature between the desired indoor air temperature and the cooled surface. [ 13 ] Passive daytime radiative cooling uses a material that fluoresces in the infrared atmospheric window , a frequency range where the atmosphere is unusually transparent, so that the energy goes straight out to space. This can cool the heat-fluorescent object to below ambient air temperature, even in full sun. [ 14 ] [ 15 ] [ 16 ] According to research conducted by the Lawrence Berkeley National Laboratory , radiant cooling systems offer lower energy consumption than conventional cooling systems. Radiant cooling energy savings depend on the climate, but on average across the US savings are in the range of 30% compared to conventional systems. Cool, humid regions might have savings of 17%, while hot, arid regions have savings of 42%. [ 10 ] Hot, dry climates offer the greatest advantage for radiant cooling, as they have the largest proportion of cooling achieved by removing sensible heat. While this research is informative, more research needs to be done to account for the limitations of simulation tools and integrated system approaches. Much of the energy savings is also attributed to the lower amount of energy required to pump water as opposed to distributing air with fans. By coupling the system with building mass, radiant cooling can shift some cooling to off-peak night-time hours. Radiant cooling appears to have lower first costs [ 17 ] and lifecycle costs compared to conventional systems. Lower first costs are largely attributed to integration with structure and design elements, while lower life-cycle costs result from decreased maintenance. However, a recent study comparing VAV reheat with active chilled beams and DOAS challenged the claim of lower first costs because of the added cost of piping. [ 18 ] Because of the potential for condensate formation on the cold radiant surface (resulting in water damage, mold and the like), radiant cooling systems have not been widely applied. Condensation caused by humidity is a limiting factor for the cooling capacity of a radiant cooling system. The surface temperature should not be equal to or below the dew point temperature in the space. Some standards suggest limiting the relative humidity in a space to 60% or 70%. An air temperature of 26 °C (79 °F) would mean a dew point between 17 and 20 °C (63 and 68 °F). [ 13 ] There is, however, evidence that suggests decreasing the surface temperature to below the dew point temperature for a short period of time may not cause condensation .
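The condensation limit described above is straightforward to estimate. A minimal Python sketch using the Magnus dew-point approximation (the constants are one commonly used parameter set; the formula is an illustration, not taken from the article or the cited standards):

```python
import math

def dew_point_c(temp_c, rh_percent):
    """Approximate dew point (deg C) via the Magnus formula."""
    a, b = 17.62, 243.12                       # one common constant set
    gamma = math.log(rh_percent / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# Example from the text: 26 deg C room air at 60-70 % relative humidity.
for rh in (60, 70):
    print(f"RH {rh}% -> dew point {dew_point_c(26.0, rh):.1f} deg C")
# Output is roughly 17.6 and 20.1 deg C; a chilled surface should stay
# above these temperatures to avoid condensation.
```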
[ 17 ] Also, the use of an additional system, such as a dehumidifier or DOAS , can limit humidity and allow for increased cooling capacity. Radiant systems, encompassing both heating and cooling, transfer heat or coolness directly through surfaces, such as floors, ceilings, or walls, instead of relying on forced-air systems. These systems are broadly categorized into three types: [ 19 ] thermally activated building systems (TABS), [ 20 ] embedded surface systems, and radiant ceiling panels. Radiant cooling from a slab can be delivered to a space from the floor or ceiling. Since radiant heating systems tend to be in the floor, the obvious choice would be to use the same circulation system for cooled water. While this makes sense in some cases, delivering cooling from the ceiling has several advantages. First, it is easier to leave ceilings exposed to a room than floors, increasing the effectiveness of the thermal mass. Floors have the downside of coverings and furnishings that decrease the effectiveness of the system. Second, greater convective heat exchange occurs through a chilled ceiling as warm air rises, leading to more air coming in contact with the cooled surface. Cooling delivered through the floor makes the most sense when there is a high amount of solar gain from sun penetration, because the cool floor can more easily remove those loads than the ceiling. [ 13 ] Chilled slabs, compared to panels, offer more significant thermal mass and therefore can take better advantage of outside diurnal temperature swings. Chilled slabs cost less per unit of surface area, and are more integrated with the structure. Chilled beams are hybrid systems that combine radiant and convective heat transfer. While not purely radiant, they are suited for spaces with varying thermal loads and integrate well with ceilings for flexible placement and ventilation. [ 9 ] The operative temperature is an indicator of thermal comfort which takes into account the effects of both convection and radiation. Operative temperature is defined as a uniform temperature of a radiantly black enclosure in which an occupant would exchange the same amount of heat by radiation plus convection as in the actual nonuniform environment. With radiant systems, thermal comfort is achieved at a warmer interior temperature than with all-air systems in the cooling scenario, and at a lower temperature in the heating scenario. [ 21 ] Thus, radiant systems can help to achieve energy savings in building operation while maintaining the desired comfort level. Based on a large study performed using the Center for the Built Environment 's Indoor environmental quality (IEQ) occupant survey to compare occupant satisfaction in radiant and all-air conditioned buildings, both systems create equal indoor environmental conditions, including acoustic satisfaction, with a tendency towards improved temperature satisfaction in radiant buildings. [ 22 ] The radiant temperature asymmetry is defined as the difference between the plane radiant temperature of the two opposite sides of a small plane element. As regards occupants within a building, the thermal radiation field around the body may be non-uniform due to hot and cold surfaces and direct sunlight, therefore bringing local discomfort. The norm ISO 7730 and the ASHRAE 55 standard give the predicted percentage of dissatisfied occupants (PPD) as a function of the radiant temperature asymmetry and specify the acceptable limits.
In general, people are more sensitive to asymmetric radiation caused by a warm ceiling than to that caused by hot and cold vertical surfaces. The detailed calculation method for the percentage dissatisfied due to radiant temperature asymmetry is described in ISO 7730. While specific design requirements will depend on the type of radiant system, a few issues are common to most radiant systems. Heating, Ventilation, and Air Conditioning (HVAC) systems require a control system to supply heating or cooling to a space. The control strategies applied depend on the type of HVAC system used, and these strategies ultimately determine the system's energy consumption. [ 25 ] Radiant systems differ from other HVAC systems in terms of heat transfer mechanisms and the potential risk of condensation , requiring tailored control strategies to address these unique characteristics. Radiant systems transfer heat by heating or cooling structural elements, such as concrete slabs or ceilings, rather than directly delivering hot or cold air. These elements primarily release heat through radiation. The response time—the time it takes for the system to reach the setpoint temperature—depends on the material's thermal mass : low thermal mass materials, such as metal panels, respond quickly, while high thermal mass materials, such as concrete slabs, adjust more slowly. When integrated with high thermal mass elements, radiant systems face challenges due to delayed temperature adjustments. This delay can lead to over-adjustments, resulting in increased energy consumption and reduced thermal comfort . [ 26 ] To address this problem, Model Predictive Control (MPC) is often employed to predict future thermal demands and adjust the heat supply proactively. For instance, MPC leverages the thermal mass of radiant systems by storing heat during off-peak times, before it is needed. This allows operations to start at night, when electricity costs and urban electricity grid loads are lower. Additionally, cooler nighttime air improves the efficiency of cooling equipment, such as air-source heat pumps , further optimizing energy use. By employing these strategies, radiant systems effectively overcome thermal mass challenges while reducing daytime electricity demand, enhancing grid stability, and lowering operational costs. [ 27 ] Radiant cooling systems can experience condensation when the surface temperature drops below the dew point of the surrounding air. This may cause occupant discomfort, promote mold growth, and damage radiant surfaces. [ 28 ] The risk is particularly high in humid climates , where warm, moist air enters through open windows and contacts cold radiant cooling surfaces. To prevent this, radiant cooling systems must be paired with effective ventilation strategies to control indoor humidity levels. Radiant cooling systems are usually hydronic , cooling using circulating water running in pipes in thermal contact with the surface. Typically the circulating water only needs to be 2–4 °C below the desired indoor air temperature. [ 13 ] Once having been absorbed by the actively cooled surface, heat is removed by water flowing through a hydronic circuit, replacing the warmed water with cooler water. Depending on the position of the pipes in the building construction, hydronic radiant systems can be sorted into four main categories. The norm ISO 11855-2 [ 30 ] focuses on embedded water-based surface heating and cooling systems and TABS.
Depending on construction details, this norm distinguishes 7 different types of these systems (Types A to G). Radiant systems are associated with low-exergy systems. Low-exergy refers to the possibility of utilizing 'low-quality energy' (i.e. dispersed energy that has little ability to do useful work). Both heating and cooling can in principle be obtained at temperature levels close to that of the ambient environment. The low temperature difference requires that the heat transmission take place over relatively large surfaces, as applied, for example, in ceiling or underfloor heating systems. [ 31 ] Radiant systems using low-temperature heating and high-temperature cooling are typical examples of low-exergy systems. Energy sources such as geothermal (direct cooling / geothermal heat pump heating) and solar hot water are compatible with radiant systems. These sources can lead to significant savings in terms of primary energy use for buildings. Some well-known buildings using radiant cooling include Bangkok's Suvarnabhumi Airport , [ 32 ] the Infosys Software Development Building 1 in Hyderabad, IIT Hyderabad , [ 33 ] and the San Francisco Exploratorium . [ 34 ] Radiant cooling is also used in many zero net energy buildings . [ 35 ] [ 36 ] Heat radiation is the energy in the form of electromagnetic waves emitted by a solid, liquid, or gas as a result of its temperature. [ 37 ] In buildings, the radiant heat flow between two internal surfaces (or a surface and a person) is influenced by the emissivity of the heat-emitting surface and by the view factor between this surface and the receptive surface (object or person) in the room. [ 38 ] Thermal (longwave) radiation travels at the speed of light, in straight lines. [ 9 ] It can be reflected. People, equipment, and surfaces in buildings will warm up if they absorb thermal radiation, but the radiation does not noticeably heat up the air it is traveling through. [ 9 ] This means heat will flow from objects, occupants, equipment, and lights in a space to a cooled surface, as long as their temperatures are warmer than that of the cooled surface and they are within the direct or indirect line of sight of the cooled surface. Some heat is also removed by convection, because the air temperature will be lowered when air comes in contact with the cooled surface. The heat transfer by radiation is proportional to the fourth power of the absolute surface temperature. The emissivity of a material (usually written ε or e) is the relative ability of its surface to emit energy by radiation. A black body has an emissivity of 1 and a perfect reflector has an emissivity of 0. [ 37 ] In radiative heat transfer, a view factor quantifies the relative importance of the radiation that leaves an object (person or surface) and strikes another one, considering the other surrounding objects. In enclosures, radiation leaving a surface is conserved; therefore, the sum of all view factors associated with a given object is equal to 1. In the case of a room, the view factor between a radiant surface and a person depends on their relative positions. As a person is often changing position, and as a room might be occupied by many persons at the same time, diagrams for an omnidirectional person can be used. [ 39 ] Response time (τ95), also known as the time constant , is used to analyze the dynamic thermal performance of radiant systems.
The response time for a radiant system is defined as the time it takes for the surface temperature of a radiant system to reach 95% of the difference between its final and initial values when a step change in control of the system is applied as input. [ 40 ] It is mainly influenced by concrete thickness, pipe spacing and, to a lesser degree, concrete type. It is not affected by pipe diameter, room operative temperature, supply water temperature, or water flow regime. Using response time, radiant systems can be classified into fast response (τ95 < 10 min, like radiant ceiling panels), medium response (1 h < τ95 < 9 h, like Types A, B, D and G) and slow response (9 h < τ95 < 19 h, like Types E and F). [ 40 ] Additionally, floor and ceiling radiant systems have different response times due to different heat transfer coefficients with the room thermal environment and the embedded position of the pipes.
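The classification above amounts to a lookup over τ95 ranges. A trivial Python sketch; note that the quoted ranges leave a gap between 10 minutes and 1 hour, which the fallback branch makes explicit:

```python
def classify_response(tau95_hours):
    """Classify a radiant system by response time tau95 (hours), per the ranges above."""
    if tau95_hours < 10.0 / 60.0:
        return "fast response (e.g. radiant ceiling panels)"
    if 1.0 <= tau95_hours <= 9.0:
        return "medium response (e.g. Types A, B, D, G)"
    if 9.0 < tau95_hours <= 19.0:
        return "slow response (e.g. Types E and F)"
    return "outside the tabulated ranges"

print(classify_response(0.1))   # fast
print(classify_response(5.0))   # medium
print(classify_response(12.0))  # slow
```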
https://en.wikipedia.org/wiki/Radiant_cooling
In radiometry , radiant energy density is the radiant energy per unit volume . [ 1 ] The SI unit of radiant energy density is the joule per cubic metre (J/m 3 ). Radiant energy density , denoted w e ("e" for "energetic", to avoid confusion with photometric quantities), is defined as [ 2 ]

{\displaystyle w_{\mathrm {e} }={\frac {\partial Q_{\mathrm {e} }}{\partial V}},}

where Q e is the radiant energy and V is the volume. Because radiation always transports energy, [ 2 ] it is useful to ask how fast the energy is transmitted. If all the radiation at a given location propagates in the same direction, then the radiant flux through a unit area perpendicular to the propagation direction is given by the irradiance : [ 2 ]

{\displaystyle E_{\mathrm {e} }=w_{\mathrm {e} }c,}

where c is the radiation propagation speed. Conversely, if the radiation intensity is equal in all directions, as in a cavity in thermodynamic equilibrium , then the energy transmission is best described by the radiance : [ 3 ]

{\displaystyle L_{\mathrm {e} ,\Omega }={\frac {w_{\mathrm {e} }c}{4\pi }}.}

Radiant exitance through a small opening from such a cavity is: [ 4 ]

{\displaystyle M_{\mathrm {e} }={\frac {w_{\mathrm {e} }c}{4}}.}

These relations can be used, for example, in the derivation of the black-body radiation equation.
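For black-body radiation in a cavity at temperature T, the Stefan–Boltzmann law pins down the numbers: w e = 4σT⁴/c, and the exitance through a small opening is M e = cw e /4 = σT⁴. A quick Python check (the example temperature is an arbitrary choice):

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
C = 2.99792458e8         # speed of light, m/s

def cavity_energy_density(T):
    """Radiant energy density of cavity (black-body) radiation: w = 4*sigma*T^4/c."""
    return 4.0 * SIGMA * T**4 / C

def exitance_through_opening(T):
    """Radiant exitance through a small opening: M = c*w/4, i.e. sigma*T^4."""
    return C * cavity_energy_density(T) / 4.0

T = 5772.0   # example temperature (roughly the Sun's effective temperature), K
print(cavity_energy_density(T))      # ~ 0.84 J/m^3
print(exitance_through_opening(T))   # ~ 6.3e7 W/m^2, equals sigma*T^4
```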
https://en.wikipedia.org/wiki/Radiant_energy_density
In radiometry , radiant exitance or radiant emittance is the radiant flux emitted by a surface per unit area, whereas spectral exitance or spectral emittance is the radiant exitance of a surface per unit frequency or wavelength , depending on whether the spectrum is taken as a function of frequency or of wavelength. This is the emitted component of radiosity . The SI unit of radiant exitance is the watt per square metre ( W/m 2 ), while that of spectral exitance in frequency is the watt per square metre per hertz (W·m −2 ·Hz −1 ) and that of spectral exitance in wavelength is the watt per square metre per metre (W·m −3 )—commonly the watt per square metre per nanometre ( W·m −2 ·nm −1 ). The CGS unit erg per square centimeter per second ( erg·cm −2 ·s −1 ) is often used in astronomy . Radiant exitance is often called "intensity" in branches of physics other than radiometry, but in radiometry this usage leads to confusion with radiant intensity . Radiant exitance of a surface , denoted M e ("e" for "energetic", to avoid confusion with photometric quantities), is defined as [ 1 ] M e = ∂ Φ e ∂ A , {\displaystyle M_{\mathrm {e} }={\frac {\partial \Phi _{\mathrm {e} }}{\partial A}},} where ∂ is the partial derivative symbol, Φ e is the radiant flux emitted , and A is the surface area . The radiant flux received by a surface is called irradiance . The radiant exitance of a black surface , according to the Stefan–Boltzmann law , is equal to: M e ∘ = σ T 4 , {\displaystyle M_{\mathrm {e} }^{\circ }=\sigma T^{4},} where σ is the Stefan–Boltzmann constant , and T is the temperature of that surface. For a real surface, the radiant exitance is equal to: M e = ε M e ∘ = ε σ T 4 , {\displaystyle M_{\mathrm {e} }=\varepsilon M_{\mathrm {e} }^{\circ }=\varepsilon \sigma T^{4},} where ε is the emissivity of that surface. Spectral exitance in frequency of a surface , denoted M e,ν , is defined as [ 1 ]

{\displaystyle M_{\mathrm {e} ,\nu }={\frac {\partial M_{\mathrm {e} }}{\partial \nu }},}

where ν is the frequency. Spectral exitance in wavelength of a surface , denoted M e,λ , is defined as [ 1 ] M e , λ = ∂ M e ∂ λ , {\displaystyle M_{\mathrm {e} ,\lambda }={\frac {\partial M_{\mathrm {e} }}{\partial \lambda }},} where λ is the wavelength. The spectral exitance of a black surface around a given frequency or wavelength, according to Lambert's cosine law and Planck's law , is equal to:

{\displaystyle {\begin{aligned}M_{\mathrm {e} ,\nu }^{\circ }&={\frac {2\pi h\nu ^{3}}{c^{2}}}{\frac {1}{e^{\frac {h\nu }{kT}}-1}},\\[8pt]M_{\mathrm {e} ,\lambda }^{\circ }&={\frac {2\pi hc^{2}}{\lambda ^{5}}}{\frac {1}{e^{\frac {hc}{\lambda kT}}-1}},\end{aligned}}}

where h is the Planck constant , ν is the frequency, λ is the wavelength, k is the Boltzmann constant , c is the speed of light in the medium, T is the temperature of that surface. For a real surface, the spectral exitance is equal to: M e , ν = ε M e , ν ∘ = 2 π h ε ν 3 c 2 1 e h ν k T − 1 , M e , λ = ε M e , λ ∘ = 2 π h ε c 2 λ 5 1 e h c λ k T − 1 . {\displaystyle {\begin{aligned}M_{\mathrm {e} ,\nu }&=\varepsilon M_{\mathrm {e} ,\nu }^{\circ }={\frac {2\pi h\varepsilon \nu ^{3}}{c^{2}}}{\frac {1}{e^{\frac {h\nu }{kT}}-1}},\\[8pt]M_{\mathrm {e} ,\lambda }&=\varepsilon M_{\mathrm {e} ,\lambda }^{\circ }={\frac {2\pi h\varepsilon c^{2}}{\lambda ^{5}}}{\frac {1}{e^{\frac {hc}{\lambda kT}}-1}}.\end{aligned}}}
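As a numerical sanity check of the formulas above, one can integrate the Planck spectral exitance over wavelength and compare against σT⁴. A minimal Python sketch; the temperature and wavelength range are arbitrary choices, and the finite range slightly undershoots the exact value:

```python
import numpy as np

H = 6.62607015e-34      # Planck constant, J s
C = 2.99792458e8        # speed of light, m/s
K = 1.380649e-23        # Boltzmann constant, J/K
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def spectral_exitance_wavelength(lam, T, emissivity=1.0):
    """M_e,lambda = eps * (2 pi h c^2 / lam^5) / (exp(hc/(lam k T)) - 1)."""
    return emissivity * 2.0 * np.pi * H * C**2 / lam**5 / np.expm1(H * C / (lam * K * T))

T = 300.0                                  # assumed surface temperature, K
lam = np.linspace(1e-6, 100e-6, 200_000)   # 1-100 um covers most of the spectrum at 300 K
M_numeric = np.trapz(spectral_exitance_wavelength(lam, T), lam)

print(M_numeric, SIGMA * T**4)   # ~449 vs ~459 W/m^2 (the tail beyond 100 um is cut off)
```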
https://en.wikipedia.org/wiki/Radiant_exitance
In radiometry , radiant exposure or fluence is the radiant energy received by a surface per unit area, or equivalently the irradiance of a surface integrated over the time of irradiation, and spectral exposure is the radiant exposure per unit frequency or wavelength , depending on whether the spectrum is taken as a function of frequency or of wavelength. The SI unit of radiant exposure is the joule per square metre ( J/m 2 ), while that of spectral exposure in frequency is the joule per square metre per hertz ( J⋅m −2 ⋅Hz −1 ) and that of spectral exposure in wavelength is the joule per square metre per metre ( J/m 3 )—commonly the joule per square metre per nanometre ( J⋅m −2 ⋅nm −1 ). Radiant exposure of a surface , denoted H e ("e" for "energetic", to avoid confusion with photometric quantities), is defined as [ 1 ] H e = ∂ Q e ∂ A = ∫ 0 T E e ( t ) d t , {\displaystyle H_{\mathrm {e} }={\frac {\partial Q_{\mathrm {e} }}{\partial A}}=\int _{0}^{T}E_{\mathrm {e} }(t)\,\mathrm {d} t,} where Q e is the radiant energy, A is the area, E e ( t ) is the irradiance at time t , and T is the duration of the irradiation. Spectral exposure in frequency of a surface , denoted H e, ν , is defined as [ 1 ] H e , ν = ∂ H e ∂ ν , {\displaystyle H_{\mathrm {e} ,\nu }={\frac {\partial H_{\mathrm {e} }}{\partial \nu }},} where ν is the frequency. Spectral exposure in wavelength of a surface , denoted H e, λ , is defined as [ 1 ] H e , λ = ∂ H e ∂ λ , {\displaystyle H_{\mathrm {e} ,\lambda }={\frac {\partial H_{\mathrm {e} }}{\partial \lambda }},} where λ is the wavelength.
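Because radiant exposure is just irradiance integrated over time, it is easy to compute from logged data. A minimal Python sketch with made-up irradiance samples:

```python
import numpy as np

# Hypothetical irradiance samples E_e(t) in W/m^2, logged once per second.
t = np.arange(10.0)                                    # time, s
E = np.array([0, 5, 20, 45, 60, 62, 50, 30, 10, 0.0])  # irradiance, W/m^2

# Radiant exposure is the irradiance integrated over the irradiation time.
H = np.trapz(E, t)
print(f"H_e ~= {H:.0f} J/m^2")
```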
https://en.wikipedia.org/wiki/Radiant_exposure
In radiometry , radiant flux or radiant power is the radiant energy emitted, reflected, transmitted, or received per unit time, and spectral flux or spectral power is the radiant flux per unit frequency or wavelength , depending on whether the spectrum is taken as a function of frequency or of wavelength. The SI unit of radiant flux is the watt (W), one joule per second ( J/s ), while that of spectral flux in frequency is the watt per hertz ( W/Hz ) and that of spectral flux in wavelength is the watt per metre ( W/m )—commonly the watt per nanometre ( W/nm ). Radiant flux , denoted Φ e ('e' for "energetic", to avoid confusion with photometric quantities), is defined as [ 1 ] Φ e = d Q e d t Q e = ∫ T ∫ Σ S ⋅ n ^ d A d t {\displaystyle {\begin{aligned}\Phi _{\mathrm {e} }&={\frac {dQ_{\mathrm {e} }}{dt}}\\[2pt]Q_{\mathrm {e} }&=\int _{T}\int _{\Sigma }\mathbf {S} \cdot {\hat {\mathbf {n} }}\,dAdt\end{aligned}}} where Q e is the radiant energy, t is the time, S is the Poynting vector , n ^ {\displaystyle {\hat {\mathbf {n} }}} is the unit normal vector to the surface Σ, A is the area, and T is the time period considered. The rate of energy flow through the surface fluctuates at the frequency of the radiation, but radiation detectors only respond to the average rate of flow. This is represented by replacing the Poynting vector with the time average of its norm, giving Φ e ≈ ∫ Σ ⟨ | S | ⟩ cos ⁡ α d A , {\displaystyle \Phi _{\mathrm {e} }\approx \int _{\Sigma }\langle |\mathbf {S} |\rangle \cos \alpha \ dA,} where ⟨-⟩ is the time average, and α is the angle between n ^ {\displaystyle {\hat {\mathbf {n} }}} and the direction of propagation of the Poynting vector. Spectral flux in frequency , denoted Φ e, ν , is defined as [ 1 ] Φ e , ν = ∂ Φ e ∂ ν , {\displaystyle \Phi _{\mathrm {e} ,\nu }={\frac {\partial \Phi _{\mathrm {e} }}{\partial \nu }},} where ν is the frequency. Spectral flux in wavelength , denoted Φ e, λ , is defined as [ 1 ] Φ e , λ = ∂ Φ e ∂ λ , {\displaystyle \Phi _{\mathrm {e} ,\lambda }={\frac {\partial \Phi _{\mathrm {e} }}{\partial \lambda }},} where λ is the wavelength.
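The relation between spectral flux and radiant flux is a single integral. A minimal Python sketch with a made-up visible-band spectrum standing in for measured data:

```python
import numpy as np

# Hypothetical spectral flux in wavelength, W/nm, for a visible-band source;
# the Gaussian shape is an illustrative stand-in for a measured spectrum.
lam = np.linspace(400.0, 700.0, 301)                     # wavelength, nm
phi_lam = 1e-3 * np.exp(-(((lam - 550.0) / 60.0) ** 2))  # spectral flux, W/nm

# Radiant flux is the integral of the spectral flux over wavelength.
phi = np.trapz(phi_lam, lam)
print(f"Phi_e ~= {phi:.4f} W")
```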
https://en.wikipedia.org/wiki/Radiant_flux
In radiometry , radiant intensity is the radiant flux emitted, reflected, transmitted or received, per unit solid angle , and spectral intensity is the radiant intensity per unit frequency or wavelength , depending on whether the spectrum is taken as a function of frequency or of wavelength. These are directional quantities. The SI unit of radiant intensity is the watt per steradian ( W/sr ), while that of spectral intensity in frequency is the watt per steradian per hertz ( W·sr −1 ·Hz −1 ) and that of spectral intensity in wavelength is the watt per steradian per metre ( W·sr −1 ·m −1 )—commonly the watt per steradian per nanometre ( W·sr −1 ·nm −1 ). Radiant intensity is distinct from irradiance and radiant exitance , which are often called intensity in branches of physics other than radiometry. In radio-frequency engineering , radiant intensity is sometimes called radiation intensity . Radiant intensity , denoted I e,Ω ("e" for "energetic", to avoid confusion with photometric quantities, and "Ω" to indicate this is a directional quantity), is defined as [ 1 ]

{\displaystyle I_{\mathrm {e} ,\Omega }={\frac {\partial \Phi _{\mathrm {e} }}{\partial \Omega }},}

where Φ e is the radiant flux emitted, reflected, transmitted or received, and Ω is the solid angle. In general, I e,Ω is a function of viewing angle θ and potentially azimuth angle . For the special case of a Lambertian surface , I e,Ω follows the Lambert's cosine law I e,Ω = I 0 cos θ . When calculating the radiant intensity emitted by a source, Ω refers to the solid angle into which the light is emitted. When calculating the radiant intensity received by a detector, Ω refers to the solid angle subtended by the source as viewed from that detector. Spectral intensity in frequency , denoted I e,Ω,ν , is defined as [ 1 ]

{\displaystyle I_{\mathrm {e} ,\Omega ,\nu }={\frac {\partial I_{\mathrm {e} ,\Omega }}{\partial \nu }},}

where ν is the frequency. Spectral intensity in wavelength , denoted I e,Ω,λ , is defined as [ 1 ]

{\displaystyle I_{\mathrm {e} ,\Omega ,\lambda }={\frac {\partial I_{\mathrm {e} ,\Omega }}{\partial \lambda }},}

where λ is the wavelength. Radiant intensity is used to characterize the emission of radiation by an antenna : [ 2 ]

{\displaystyle I_{\mathrm {e} ,\Omega }=E_{\mathrm {e} }(r)\,r^{2},}

where E e ( r ) is the irradiance (power density) at distance r from the antenna. Unlike power density, radiant intensity does not depend on distance: because radiant intensity is defined as the power through a solid angle, the decreasing power density over distance due to the inverse-square law is offset by the increase in area with distance.
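The distance-independence noted above is easy to verify for the simplest case of an isotropic radiator, for which the intensity is the total power spread over 4π steradians. A minimal Python sketch (the power value is an arbitrary assumption):

```python
import math

P = 10.0                    # assumed total radiated power, W
U = P / (4.0 * math.pi)     # radiant intensity of an isotropic radiator, W/sr

def power_density(r):
    """Irradiance at range r; falls off as 1/r^2, while U stays constant."""
    return U / r**2

for r in (1.0, 10.0, 100.0):
    # power_density(r) * r^2 recovers the same intensity at every distance
    print(f"r = {r:6.1f}  E = {power_density(r):.6f} W/m^2  "
          f"E*r^2 = {power_density(r) * r**2:.4f} W/sr")
```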
https://en.wikipedia.org/wiki/Radiant_intensity
In physics , radiation is the emission or transmission of energy in the form of waves or particles through space or a material medium. [ 1 ] [ 2 ] This includes electromagnetic radiation (such as radio waves, microwaves, infrared, visible light, ultraviolet, X-rays, and gamma radiation), particle radiation (such as alpha, beta, and neutron radiation), and other forms such as acoustic and gravitational radiation. Radiation is often categorized as either ionizing or non-ionizing depending on the energy of the radiated particles. Ionizing radiation carries more than 10 electron volts (eV) , which is enough to ionize atoms and molecules and break chemical bonds . This is an important distinction due to the large difference in harmfulness to living organisms. A common source of ionizing radiation is radioactive materials that emit α, β, or γ radiation , consisting of helium nuclei , electrons or positrons , and photons , respectively. Other sources include X-rays from medical radiography examinations and muons , mesons , positrons, neutrons and other particles that constitute the secondary cosmic rays that are produced after primary cosmic rays interact with Earth's atmosphere . Gamma rays, X-rays, and the higher energy range of ultraviolet light constitute the ionizing part of the electromagnetic spectrum . The word "ionize" refers to the breaking of one or more electrons away from an atom, an action that requires the relatively high energies that these electromagnetic waves supply. Further down the spectrum, the non-ionizing lower energies of the lower ultraviolet spectrum cannot ionize atoms, but can disrupt the inter-atomic bonds that form molecules, thereby breaking down molecules rather than atoms; a good example of this is sunburn caused by long-wavelength solar ultraviolet. Waves of longer wavelength than UV, in the visible light, infrared, and microwave frequencies, cannot break bonds, but can cause vibrations in the bonds which are sensed as heat . Radio wavelengths and below generally are not regarded as harmful to biological systems. These are not sharp delineations of the energies; there is some overlap in the effects of specific frequencies . [ 3 ] The word "radiation" arises from the phenomenon of waves radiating (i.e., traveling outward in all directions) from a source. This aspect leads to a system of measurements and physical units that apply to all types of radiation. Because such radiation expands as it passes through space, and as its energy is conserved (in vacuum), the intensity of all types of radiation from a point source follows an inverse-square law in relation to the distance from its source. Like any ideal law, the inverse-square law approximates a measured radiation intensity to the extent that the source approximates a geometric point. Radiation with sufficiently high energy can ionize atoms; that is to say, it can knock electrons off atoms, creating ions. Ionization occurs when an electron is stripped (or "knocked out") from an electron shell of the atom, which leaves the atom with a net positive charge. Because living cells and, more importantly, the DNA in those cells can be damaged by this ionization, exposure to ionizing radiation increases the risk of cancer . Thus "ionizing radiation" is somewhat artificially separated from particle radiation and electromagnetic radiation, simply due to its great potential for biological damage. While an individual cell is made of trillions of atoms, only a small fraction of those will be ionized at low to moderate radiation powers. 
The probability of ionizing radiation causing cancer is dependent upon the absorbed dose of the radiation, and is a function of the damaging tendency of the type of radiation ( equivalent dose ) and the sensitivity of the irradiated organism or tissue ( effective dose ). If the source of the ionizing radiation is a radioactive material or a nuclear process such as fission or fusion , there is particle radiation to consider. Particle radiation is subatomic particles accelerated to relativistic speeds by nuclear reactions. Because of their momenta , they are quite capable of knocking out electrons and ionizing materials, but since most carry an electrical charge, they do not have the penetrating power of electromagnetic ionizing radiation such as gamma rays or X-rays. The exception is neutron particles; see below. There are several different kinds of these particles, but the majority are alpha particles , beta particles , neutrons , and protons . Roughly speaking, photons and particles with energies above about 10 electron volts (eV) are ionizing (some authorities use 33 eV, the ionization energy for water). Particle radiation from radioactive material or cosmic rays almost invariably carries enough energy to be ionizing. Most ionizing radiation originates from radioactive materials and space (cosmic rays), and as such is naturally present in the environment, since most rocks and soil have small concentrations of radioactive materials. Since this radiation is invisible and not directly detectable by human senses, instruments such as Geiger counters are usually required to detect its presence. In some cases, it may lead to secondary emission of visible light upon its interaction with matter, as in the case of Cherenkov radiation and radio-luminescence. Ionizing radiation has many practical uses in medicine, research, and construction, but presents a health hazard if used improperly. Exposure to radiation causes damage to living tissue; high doses result in acute radiation syndrome (ARS), with skin burns, hair loss, internal organ failure, and death, while any dose may result in an increased chance of cancer and genetic damage . A particular form of cancer, thyroid cancer , often occurs when nuclear weapons and reactors are the radiation source, because the thyroid gland concentrates the radioactive iodine fission product, iodine-131 . [ 4 ] However, the exact risk of cancer forming in cells exposed to ionizing radiation is still not well understood, and current estimates are loosely based on population data from the atomic bombings of Hiroshima and Nagasaki and from follow-up of reactor accidents, such as the Chernobyl disaster . The International Commission on Radiological Protection states that "The Commission is aware of uncertainties and lack of precision of the models and parameter values", "Collective effective dose is not intended as a tool for epidemiological risk assessment, and it is inappropriate to use it in risk projections" and "in particular, the calculation of the number of cancer deaths based on collective effective doses from trivial individual doses should be avoided". [ 5 ] Ultraviolet of wavelengths from 10 nm to 200 nm ionizes air molecules, causing it to be strongly absorbed by air, and by ozone (O₃) in particular. Ionizing UV therefore does not penetrate Earth's atmosphere to a significant degree, and is sometimes referred to as vacuum ultraviolet . 
Although present in space, this part of the UV spectrum is not of biological importance, because it does not reach living organisms on Earth. There is a zone of the atmosphere in which ozone absorbs some 98% of non-ionizing but dangerous UV-C and UV-B. This ozone layer starts at about 20 miles (32 km) and extends upward. Some of the ultraviolet spectrum that does reach the ground is non-ionizing, but is still biologically hazardous due to the ability of single photons of this energy to cause electronic excitation in biological molecules, and thus damage them by means of unwanted reactions. An example is the formation of pyrimidine dimers in DNA, which begins at wavelengths below 365 nm (3.4 eV), well below ionization energy. This property gives the ultraviolet spectrum some of the dangers of ionizing radiation in biological systems without actual ionization occurring. In contrast, visible light and longer- wavelength electromagnetic radiation, such as infrared, microwaves, and radio waves, consists of photons with too little energy to cause damaging molecular excitation, and thus this radiation is far less hazardous per unit of energy. X-rays are electromagnetic waves with a wavelength less than about 10⁻⁹ m (greater than 3 × 10¹⁷ Hz and 1240 eV ). A smaller wavelength corresponds to a higher energy according to the equation E = hc/λ, where E is the energy, h is the Planck constant, c is the speed of light, and λ is the wavelength. When an X-ray photon collides with an atom, the atom may absorb the energy of the photon and boost an electron to a higher orbital level, or, if the photon is extremely energetic, it may knock an electron from the atom altogether, causing the atom to ionize. Generally, larger atoms are more likely to absorb an X-ray photon, since they have greater energy differences between orbital electrons. The soft tissue in the human body is composed of smaller atoms than the calcium atoms that make up bone, so there is a contrast in the absorption of X-rays. X-ray machines are specifically designed to take advantage of the absorption difference between bone and soft tissue, allowing physicians to examine structure in the human body. X-rays are also totally absorbed by the thickness of Earth's atmosphere, which prevents the X-ray output of the Sun, smaller in quantity than that of UV but nonetheless powerful, from reaching the surface. Gamma (γ) radiation consists of photons with a wavelength less than 3 × 10⁻¹¹ m (greater than 10¹⁹ Hz and 41.4 keV). [ 4 ] Gamma radiation emission is a nuclear process that occurs to rid an unstable nucleus of excess energy after most nuclear reactions. Both alpha and beta particles have an electric charge and mass, and thus are quite likely to interact with other atoms in their path. Gamma radiation, however, is composed of photons, which have neither mass nor electric charge and, as a result, penetrates much further through matter than either alpha or beta radiation. Gamma rays can be stopped by a sufficiently thick or dense layer of material, where the stopping power of the material per given area depends mostly (but not entirely) on the total mass along the path of the radiation, regardless of whether the material is of high or low density. However, as is the case with X-rays, materials with a high atomic number such as lead or depleted uranium add a modest (typically 20% to 30%) amount of stopping power over an equal mass of less dense and lower atomic weight materials (such as water or concrete). 
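The boundary energies quoted above follow directly from E = hc/λ. A quick Python check of the two figures (1 nm for the X-ray edge, 0.03 nm for the gamma edge; the small discrepancy with the quoted 41.4 keV is rounding):

```python
# Photon energy from wavelength via E = h*c/lambda, expressed in eV.
H = 6.626_070_15e-34    # Planck constant, J*s
C = 2.997_924_58e8      # speed of light, m/s
EV = 1.602_176_634e-19  # joules per electronvolt

def photon_energy_ev(wavelength_m: float) -> float:
    """Energy in eV of a photon with the given wavelength in metres."""
    return H * C / wavelength_m / EV

print(f"{photon_energy_ev(1e-9):.0f} eV")        # ~1240 eV at 10^-9 m
print(f"{photon_energy_ev(3e-11)/1e3:.1f} keV")  # ~41.3 keV at 3e-11 m
```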
The atmosphere absorbs all gamma rays approaching Earth from space. Even air absorbs gamma rays, halving the intensity of such waves as they pass through, on average, 500 ft (150 m). Alpha particles are helium-4 nuclei (two protons and two neutrons). They interact with matter strongly due to their charges and combined mass, and at their usual velocities only penetrate a few centimetres of air, or a few millimetres of low-density material (such as the thin mica material which is specially placed in some Geiger counter tubes to allow alpha particles in). This means that alpha particles from ordinary alpha decay do not penetrate the outer layers of dead skin cells and cause no damage to the live tissues below. Very high energy alpha particles make up about 10% of cosmic rays , and these are capable of penetrating the body and even thin metal plates. However, they are a danger only to astronauts, since at ground level they are deflected by Earth's magnetic field and then stopped by its atmosphere. Alpha radiation is dangerous when alpha-emitting radioisotopes are inhaled or ingested (breathed or swallowed). This brings the radioisotope close enough to sensitive live tissue for the alpha radiation to damage cells. Per unit of energy, alpha particles are at least 20 times more effective at damaging cells than gamma rays and X-rays; see relative biological effectiveness for a discussion of this. Examples of highly poisonous alpha-emitters are all isotopes of radium , radon , and polonium , due to the amount of decay that occurs in these short half-life materials. Beta-minus (β − ) radiation consists of an energetic electron. It is more penetrating than alpha radiation, but less so than gamma. Beta radiation from radioactive decay can be stopped with a few centimetres of plastic or a few millimetres of metal. It occurs when a neutron decays into a proton in a nucleus, releasing the beta particle and an antineutrino . Beta radiation from linear accelerators (linacs) is far more energetic and penetrating than natural beta radiation. It is sometimes used therapeutically in radiotherapy to treat superficial tumors. Beta-plus (β + ) radiation is the emission of positrons , which are the antimatter form of electrons. When a positron slows to speeds similar to those of electrons in the material, the positron will annihilate an electron, releasing two gamma photons of 511 keV in the process. Those two gamma photons will be traveling in (approximately) opposite directions. The gamma radiation from positron annihilation consists of high energy photons, and is also ionizing. Neutron radiation consists of free neutrons , which are categorized according to their speed or energy. These neutrons may be emitted during either spontaneous or induced nuclear fission. Neutrons are rare radiation particles; they are produced in large numbers only where chain-reaction fission or fusion reactions are active; this happens for about 10 microseconds in a thermonuclear explosion, or continuously inside an operating nuclear reactor; production of the neutrons stops almost immediately in the reactor when it goes non-critical. Neutrons can make other objects, or material, radioactive. This process, called neutron activation , is the primary method used to produce radioactive sources for use in medical, academic, and industrial applications. Even comparatively low speed thermal neutrons cause neutron activation (in fact, they cause it more efficiently). 
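The 511 keV photon energy quoted above for positron annihilation is simply the electron rest energy, E = m_e c², shared per annihilation quantum. A one-line check in Python, with the physical constants hard-coded rather than taken from a library:

```python
# Energy of each photon from electron-positron annihilation at rest:
# E = m_e * c**2, converted from joules to keV.
m_e = 9.109_383_7e-31   # electron (and positron) rest mass, kg
c = 2.997_924_58e8      # speed of light, m/s
eV = 1.602_176_634e-19  # joules per electronvolt

E_keV = m_e * c**2 / eV / 1e3
print(f"Photon energy per annihilation quantum: {E_keV:.1f} keV")  # ~511.0
```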
Neutrons do not ionize atoms in the same way that charged particles such as protons and electrons do (by the excitation of an electron), because neutrons have no charge. They cause ionization through their absorption by nuclei, which then become unstable; hence, neutrons are said to be "indirectly ionizing". Even neutrons without significant kinetic energy are indirectly ionizing, and are thus a significant radiation hazard. Not all materials are capable of neutron activation; in water, for example, the most common isotopes of both types of atoms present (hydrogen and oxygen) capture neutrons and become heavier but remain stable forms of those atoms. Only the absorption of more than one neutron, a statistically rare occurrence, can activate a hydrogen atom, while oxygen requires two additional absorptions. Thus water is only very weakly capable of activation. The sodium in salt (as in sea water), on the other hand, needs only to absorb a single neutron to become Na-24, a very intense source of beta decay with a half-life of 15 hours. In addition, high-energy (high-speed) neutrons have the ability to directly ionize atoms. One mechanism by which high energy neutrons ionize atoms is to strike the nucleus of an atom and knock the atom out of a molecule, leaving one or more electrons behind as the chemical bond is broken. This leads to production of chemical free radicals . In addition, very high energy neutrons can cause ionizing radiation by "neutron spallation" or knockout, wherein neutrons cause emission of high-energy protons from atomic nuclei (especially hydrogen nuclei) on impact. The last process imparts most of the neutron's energy to the proton, much like one billiard ball striking another. The charged protons and other products from such reactions are directly ionizing. High-energy neutrons are very penetrating and can travel great distances in air (hundreds or even thousands of metres) and moderate distances (several metres) in common solids. They typically require hydrogen-rich shielding, such as concrete or water, to block them within distances of less than 1 m. A common source of neutron radiation occurs inside a nuclear reactor , where a metres-thick water layer is used as effective shielding. There are two sources of high energy particles entering the Earth's atmosphere from outer space: the Sun and deep space. The Sun continuously emits particles, primarily free protons, in the solar wind, and occasionally augments the flow hugely with coronal mass ejections (CME). The particles from deep space (inter- and extra-galactic) are much less frequent, but of much higher energies. These particles are also mostly protons, with much of the remainder consisting of helions (alpha particles). A few completely ionized nuclei of heavier elements are present. The origin of these galactic cosmic rays is not yet well understood, but they seem to be remnants of supernovae and especially gamma-ray bursts (GRB), which feature magnetic fields capable of the huge accelerations measured from these particles. They may also be generated by quasars , which are galaxy-wide jet phenomena similar to GRBs but known for their much larger size, and which seem to be a violent part of the universe's early history. The kinetic energy of particles of non-ionizing radiation is too small to produce charged ions when passing through matter. 
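Activation products decay by the usual exponential law, so the 15-hour half-life of Na-24 quoted above fixes how quickly its activity dies away. A short illustrative Python sketch:

```python
# Fraction of Na-24 remaining after a given time, N/N0 = 2**(-t / t_half),
# assuming the 15-hour half-life quoted in the text.
t_half_h = 15.0  # Na-24 half-life, hours

def fraction_remaining(t_hours: float) -> float:
    """Fraction of the original Na-24 nuclei still undecayed after t_hours."""
    return 2.0 ** (-t_hours / t_half_h)

for t in (15, 30, 60, 150):
    print(f"after {t:3d} h: {fraction_remaining(t):.4f}")
# after 15 h: 0.5000; after 150 h (ten half-lives): ~0.001
```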
For non-ionizing electromagnetic radiation (see types below), the associated particles (photons) have only sufficient energy to change the rotational, vibrational or electronic valence configurations of molecules and atoms. The effect of non-ionizing forms of radiation on living tissue has only recently been studied. Nevertheless, different biological effects are observed for different types of non-ionizing radiation. [ 4 ] [ 6 ] Even "non-ionizing" radiation is capable of causing thermal ionization if it deposits enough heat to raise temperatures to ionization energies. These reactions occur at far higher total energies than with ionizing radiation, which requires only single particles to cause ionization. Familiar examples of thermal ionization are the flame ionization of a common fire, and the browning reactions in common food items induced by infrared radiation during broiling-type cooking. The electromagnetic spectrum is the range of all possible electromagnetic radiation frequencies. [ 4 ] The electromagnetic spectrum (usually just spectrum) of an object is the characteristic distribution of electromagnetic radiation emitted by, or absorbed by, that particular object. The non-ionizing portion of electromagnetic radiation consists of electromagnetic waves that (as individual quanta or particles, see photon ) are not energetic enough to detach electrons from atoms or molecules and hence cause their ionization. These include radio waves, microwaves, infrared, and (sometimes) visible light. The lower frequencies of ultraviolet light may cause chemical changes and molecular damage similar to ionization, but are technically not ionizing. The highest frequencies of ultraviolet light, as well as all X-rays and gamma rays, are ionizing. The occurrence of ionization depends on the energy of the individual particles or waves, and not on their number. An intense flood of particles or waves will not cause ionization if these particles or waves do not carry enough energy to be ionizing, unless they raise the temperature of a body to a point high enough to ionize small fractions of atoms or molecules by the process of thermal ionization (this, however, requires relatively extreme radiation intensities). As noted above, the lower part of the spectrum of ultraviolet, called soft UV, from 3 eV to about 10 eV, is non-ionizing. However, the effects of non-ionizing ultraviolet on chemistry and the damage to biological systems exposed to it (including oxidation, mutation, and cancer) are such that even this part of ultraviolet is often compared with ionizing radiation. Light, or visible light, is a very narrow range of electromagnetic radiation of wavelengths visible to the human eye, 380–750 nm, which equates to a frequency range of 790 to 400 THz. [ 4 ] More broadly, physicists use the term "light" to mean electromagnetic radiation of all wavelengths, whether visible or not. Infrared (IR) light is electromagnetic radiation with a wavelength between 0.7 and 300 μm, which corresponds to a frequency range between 430 THz and 1 THz. IR wavelengths are longer than those of visible light, but shorter than those of microwaves. Infrared may be detected at a distance from the radiating objects by "feel". Infrared-sensing snakes can detect and focus infrared by use of a pinhole lens in their heads, called "pits". Bright sunlight provides an irradiance of just over 1 kW/m² at sea level. 
Of this energy, 53% is infrared radiation, 44% is visible light, and 3% is ultraviolet radiation. [ 4 ] Microwaves are electromagnetic waves with wavelengths ranging from as short as 1 mm to as long as 1 m, which equates to a frequency range of 300 MHz to 300 GHz. This broad definition includes both UHF and EHF (millimetre waves), but various sources use somewhat different limits. [ 4 ] In all cases, microwaves include the entire super high frequency band (3 to 30 GHz, or 10 to 1 cm) at minimum, with RF engineering often putting the lower boundary at 1 GHz (30 cm) and the upper around 100 GHz (3 mm). Radio waves are a type of electromagnetic radiation with wavelengths in the electromagnetic spectrum longer than infrared light. Like all other electromagnetic waves, they travel at the speed of light. Naturally occurring radio waves are made by lightning, or by certain astronomical objects. Artificially generated radio waves are used for fixed and mobile radio communication, broadcasting, radar and other navigation systems, satellite communication, computer networks and innumerable other applications. In addition, almost any wire carrying alternating current will radiate some of the energy away as radio waves; these are mostly termed interference. Different frequencies of radio waves have different propagation characteristics in the Earth's atmosphere; long waves may follow the curvature of the Earth and may cover a part of the Earth very consistently, while shorter waves travel around the world by multiple reflections off the ionosphere and the Earth. Much shorter wavelengths bend or reflect very little and travel along the line of sight. Very low frequency (VLF) refers to a frequency range of 3 to 30 kHz, which corresponds to wavelengths of 100,000 to 10,000 m. Since there is not much bandwidth in this range of the radio spectrum, only the very simplest signals can be transmitted, such as for radio navigation. This band is also known as the myriametre band or myriametre wave, as the wavelengths range from ten to one myriametre (an obsolete metric unit equal to 10 km). Extremely low frequency (ELF) covers radiation frequencies from 3 to 30 Hz (wavelengths of 10⁸ to 10⁷ m). In atmospheric science, an alternative definition is usually given, from 3 Hz to 3 kHz. [ 4 ] In the related magnetosphere science, the lower frequency electromagnetic oscillations (pulsations occurring below ~3 Hz) are considered to lie in the ULF range, which is thus also defined differently from the ITU radio bands. A massive military ELF antenna in Michigan radiates very slow messages to otherwise unreachable receivers, such as submerged submarines. Thermal radiation is a common synonym for infrared radiation emitted by objects at temperatures often encountered on Earth. Thermal radiation refers not only to the radiation itself, but also to the process by which the surface of an object radiates its thermal energy in the form of black-body radiation. Infrared or red radiation from a common household radiator or electric heater is an example of thermal radiation, as is the heat emitted by an operating incandescent light bulb. Thermal radiation is generated when energy from the movement of charged particles within atoms is converted to electromagnetic radiation. As noted above, even low-frequency thermal radiation may cause thermal ionization whenever it deposits sufficient thermal energy to raise temperatures to a high enough level. 
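The band limits quoted above are simple λ = c/f conversions, which makes their internal consistency easy to verify (band tables conventionally round c to 3 × 10⁸ m/s, so e.g. VLF at 3 kHz comes out as exactly 100 km). A minimal Python check:

```python
# Wavelength <-> frequency conversion for the radio bands quoted above.
C = 2.997_924_58e8  # speed of light, m/s (band tables round this to 3e8)

def wavelength_m(freq_hz: float) -> float:
    return C / freq_hz

for name, f in [("microwave low", 300e6), ("microwave high", 300e9),
                ("VLF low", 3e3), ("VLF high", 30e3),
                ("ELF low", 3.0), ("ELF high", 30.0)]:
    print(f"{name:14s} {f:12.0f} Hz -> {wavelength_m(f):14.1f} m")
# 300 MHz -> ~1 m, 300 GHz -> ~1 mm, 3 kHz -> ~100 km, 3 Hz -> ~10^8 m
```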
Common examples of this are the ionization (plasma) seen in common flames, and the molecular changes caused by the " browning " during food-cooking, which is a chemical process that begins with a large component of ionization. Black-body radiation is an idealized spectrum of radiation emitted by a body that is at a uniform temperature. The shape of the spectrum and the total amount of energy emitted by the body are functions of the absolute temperature of that body. The radiation emitted covers the entire electromagnetic spectrum, and the intensity of the radiation (power per unit area) at a given frequency is described by Planck's law of radiation. For a given temperature of a black-body there is a particular frequency at which the radiation emitted is at its maximum intensity. That maximum radiation frequency moves toward higher frequencies as the temperature of the body increases. The frequency at which the black-body radiation is at maximum is given by Wien's displacement law and is a function of the body's absolute temperature. A black-body is one that emits, at any temperature, the maximum possible amount of radiation at any given wavelength. A black-body will also absorb the maximum possible incident radiation at any given wavelength. A black-body with a temperature at or below room temperature would thus appear absolutely black, as it would not reflect any incident light nor would it emit enough radiation at visible wavelengths for our eyes to detect. Theoretically, a black-body emits electromagnetic radiation over the entire spectrum, from very low frequency radio waves to X-rays, creating a continuum of radiation. The color of a radiating black-body tells the temperature of its radiating surface. It is responsible for the color of stars , which vary from infrared through red ( 2500 K ), to yellow ( 5800 K ), to white and to blue-white ( 15 000 K ) as the peak radiance passes through those points in the visible spectrum. When the peak is below the visible spectrum the body is black, while when it is above the body is blue-white, since all the visible colors are represented from blue decreasing to red. Electromagnetic radiation of wavelengths other than those of visible light was discovered in the early 19th century. The discovery of infrared radiation is ascribed to the astronomer William Herschel , who published his results in 1800 before the Royal Society of London . Herschel used a prism to refract light from the Sun and detected the infrared (beyond the red part of the spectrum) through an increase in the temperature recorded by a thermometer . In 1801, the German physicist Johann Wilhelm Ritter , using a similar prism arrangement, made the discovery of ultraviolet by noting that the rays from a prism darkened silver chloride preparations more quickly than violet light. Ritter's experiments were an early precursor to what would become photography. Ritter noted that the UV rays were capable of causing chemical reactions. The first radio waves detected were not from a natural source, but were produced deliberately and artificially by the German scientist Heinrich Hertz in 1887, using electrical circuits calculated to produce oscillations in the radio frequency range, following formulas suggested by the equations of James Clerk Maxwell . Wilhelm Röntgen discovered and named X-rays . While experimenting with high voltages applied to an evacuated tube on 8 November 1895, he noticed a fluorescence on a nearby plate of coated glass. 
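The peak-versus-temperature relationship described above can be made concrete with the wavelength form of Wien's displacement law, λ_max = b/T, evaluated at the stellar temperatures quoted in the text:

```python
# Peak emission wavelength of a black body via Wien's displacement law,
# lambda_max = b / T, at the quoted stellar surface temperatures.
b = 2.897_771_955e-3  # Wien's displacement constant, m*K

for T in (2500, 5800, 15_000):  # K: red, yellow (Sun-like), blue-white stars
    lam_nm = b / T * 1e9
    print(f"T = {T:6d} K -> peak at {lam_nm:6.0f} nm")
# 2500 K -> ~1159 nm (infrared); 5800 K -> ~500 nm (visible);
# 15000 K -> ~193 nm (ultraviolet, hence the blue-white appearance)
```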
Within a month, he discovered the main properties of X-rays that we understand to this day. In 1896, Henri Becquerel found that rays emanating from certain minerals penetrated black paper and caused fogging of an unexposed photographic plate. His doctoral student Marie Curie discovered that only certain chemical elements gave off these rays of energy. She named this behavior radioactivity . Alpha rays (alpha particles) and beta rays ( beta particles ) were differentiated by Ernest Rutherford through simple experimentation in 1899. [ 7 ] Rutherford used a generic pitchblende radioactive source and determined that the rays produced by the source had differing penetrations in materials. One type had short penetration (it was stopped by paper) and a positive charge, which Rutherford named alpha rays . The other was more penetrating (able to expose film through paper but not metal) and had a negative charge, and this type Rutherford named beta . This was the radiation that had been first detected by Becquerel from uranium salts. In 1900, the French scientist Paul Villard discovered a third, neutrally charged and especially penetrating, type of radiation from radium, and after Villard described it, Rutherford realized it must be yet a third type of radiation, which in 1903 he named gamma rays . Henri Becquerel himself proved that beta rays are fast electrons, while Rutherford and Thomas Royds proved in 1909 that alpha particles are ionized helium. Rutherford and Edward Andrade proved in 1914 that gamma rays are like X-rays, but with shorter wavelengths. Cosmic radiation striking Earth from outer space was finally definitively recognized and proven to exist in 1912, when the scientist Victor Hess carried an electrometer to various altitudes in a free balloon flight. The nature of these radiations was only gradually understood in later years. The neutron and neutron radiation were discovered by James Chadwick in 1932. A number of other high energy particulate radiations such as positrons , muons , and pions were discovered by cloud chamber examination of cosmic ray reactions shortly thereafter, and other types of particle radiation were produced artificially in particle accelerators through the last half of the twentieth century. Radiation and radioactive substances are used for diagnosis, treatment, and research. X-rays, for example, pass through muscles and other soft tissue but are stopped by dense materials. This property of X-rays enables doctors to find broken bones and to locate cancers that might be growing in the body. [ 8 ] Doctors also find certain diseases by injecting a radioactive substance and monitoring the radiation given off as the substance moves through the body. [ 9 ] Radiation used for cancer treatment is called ionizing radiation because it forms ions in the cells of the tissues it passes through as it dislodges electrons from atoms. This can kill cells or change genes so the cells cannot grow. Other forms of radiation such as radio waves, microwaves, and light waves are called non-ionizing. They do not have as much energy, so they are not able to ionize cells. [ 10 ] All modern communication systems use forms of electromagnetic radiation. Variations in the intensity of the radiation represent changes in the sound, pictures, or other information being transmitted. For example, a human voice can be sent as a radio wave or microwave by making the wave vary in correspondence with variations in the voice. 
Musicians have also experimented with gamma-ray sonification, using nuclear radiation to produce sound and music. [ 11 ] Researchers use radioactive atoms to determine the age of materials that were once part of a living organism. The age of such materials can be estimated by measuring the amount of radioactive carbon they contain, in a process called radiocarbon dating . Similarly, using other radioactive elements, the age of rocks and other geological features (even some man-made objects) can be determined; this is called radiometric dating . Environmental scientists use radioactive atoms, known as tracer atoms , to identify the pathways taken by pollutants through the environment. Radiation is used to determine the composition of materials in a process called neutron activation analysis . In this process, scientists bombard a sample of a substance with particles called neutrons. Some of the atoms in the sample absorb neutrons and become radioactive. The scientists can identify the elements in the sample by studying the emitted radiation. Radiation is not always dangerous, and not all types of radiation are equally dangerous, contrary to several common medical myths. [ 12 ] [ 13 ] [ 14 ] For example, although bananas contain naturally occurring radioactive isotopes , particularly potassium-40 (⁴⁰K), which emit ionizing radiation when undergoing radioactive decay, the levels of such radiation are far too low to induce radiation poisoning , and bananas are not a radiation hazard . It would not be physically possible to eat enough bananas to cause radiation poisoning, as the radiation dose from bananas is non-cumulative . [ 15 ] [ 16 ] [ 17 ] Radiation is ubiquitous on Earth, and humans are adapted to survive at the normal low-to-moderate levels of radiation found on Earth's surface. The relationship between dose and toxicity is often non-linear , and many substances that are toxic at very high doses actually have neutral or positive health effects, or are biologically essential, at moderate or low doses. There is some evidence to suggest that this is true for ionizing radiation: normal levels of ionizing radiation may serve to stimulate and regulate the activity of DNA repair mechanisms . High enough levels of any kind of radiation will eventually become lethal, however. [ 18 ] [ 19 ] [ 20 ] Ionizing radiation in certain conditions can damage living organisms, causing cancer or genetic damage. [ 4 ] Non-ionizing radiation in certain conditions also can cause damage to living organisms, such as burns . In 2011, the International Agency for Research on Cancer (IARC) of the World Health Organization (WHO) released a statement adding radio-frequency electromagnetic fields (including microwave and millimetre waves) to their list of things which are possibly carcinogenic to humans. [ 21 ] RWTH Aachen University's EMF-Portal web site presents one of the largest databases on the effects of electromagnetic radiation . As of 12 July 2019, it had 28,547 publications and 6,369 summaries of individual scientific studies on the effects of electromagnetic fields. [ 22 ] On Earth there are different sources of radiation, natural as well as artificial. Natural radiation can come from the Sun, from Earth itself, or from cosmic radiation .
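Radiocarbon dating rests on the exponential decay law; inverting it gives an age from the surviving carbon-14 fraction. A minimal Python sketch, assuming the conventional 5730-year half-life and ignoring the calibration corrections used in practice:

```python
import math

# Radiocarbon age estimate from the surviving fraction of carbon-14:
# t = t_half / ln(2) * ln(1 / fraction).
T_HALF_C14 = 5730.0  # years (conventional Libby-era value, assumed)

def radiocarbon_age(fraction_remaining: float) -> float:
    """Age in years implied by the given fraction of original C-14."""
    return T_HALF_C14 / math.log(2) * math.log(1.0 / fraction_remaining)

print(f"{radiocarbon_age(0.5):.0f} years")   # 5730: exactly one half-life
print(f"{radiocarbon_age(0.25):.0f} years")  # 11460: two half-lives
```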
https://en.wikipedia.org/wiki/Radiation
Radiation-enhanced diffusion is a phenomenon in materials science wherein the presence of radiation accelerates the diffusion of atoms or ions within a material. The effect arises from the creation of defects in the crystal lattice, such as vacancies or interstitials, by the radiation. [ 1 ]
https://en.wikipedia.org/wiki/Radiation-enhanced_diffusion
Radiation chemistry is a subdivision of nuclear chemistry which studies the chemical effects of ionizing radiation on matter. This is quite different from radiochemistry , as no radioactivity needs to be present in the material which is being chemically changed by the radiation. An example is the conversion of water into hydrogen gas and hydrogen peroxide . As ionizing radiation moves through matter, its energy is deposited through interactions with the electrons of the absorber. [ 1 ] The result of an interaction between the radiation and the absorbing species is the removal of an electron from an atom or molecular bond to form radicals and excited species. The radical species then proceed to react with each other or with other molecules in their vicinity. It is the reactions of the radical species that are responsible for the changes observed following irradiation of a chemical system. [ 2 ] Charged radiation species (α and β particles) interact through Coulombic forces between the charges of the electrons in the absorbing medium and the charged radiation particle. These interactions occur continuously along the path of the incident particle until the kinetic energy of the particle is sufficiently depleted. Uncharged species (γ photons, X-rays) undergo a single event per photon, totally consuming the energy of the photon and leading to the ejection of an electron from a single atom. [ 3 ] Electrons with sufficient energy proceed to interact with the absorbing medium identically to β radiation. An important factor that distinguishes different radiation types from one another is the linear energy transfer ( LET ), which is the rate at which the radiation loses energy with distance traveled through the absorber. Low-LET species are usually of low mass, either photons or electron-mass species ( β particles , positrons ), and interact sparsely along their path through the absorber, leading to isolated regions of reactive radical species. High-LET species are usually greater in mass than one electron, [ 4 ] for example α particles, and lose energy rapidly, resulting in a cluster of ionization events in close proximity to one another. Consequently, the heavy particle travels a relatively short distance from its origin. Areas containing a high concentration of reactive species following absorption of energy from radiation are referred to as spurs . In a medium irradiated with low-LET radiation, the spurs are sparsely distributed across the track and are unable to interact. For high-LET radiation, the spurs can overlap, allowing for inter-spur reactions, leading to different yields of products when compared to the same medium irradiated with the same energy of low-LET radiation. [ 5 ] A recent area of work has been the destruction of toxic organic compounds by irradiation; [ 6 ] after irradiation, " dioxins " (polychlorodibenzo- p -dioxins) are dechlorinated in the same way as PCBs can be converted to biphenyl and inorganic chloride. This is because the solvated electrons react with the organic compound to form a radical anion, which decomposes by the loss of a chloride anion. If a deoxygenated mixture of PCBs in isopropanol or mineral oil is irradiated with gamma rays , then the PCBs will be dechlorinated to form inorganic chloride and biphenyl . The reaction works best in isopropanol if potassium hydroxide ( caustic potash ) is added. 
The base deprotonates the hydroxydimethylmethyl radical, converting it into acetone and a solvated electron. As a result, the G value (the yield for a given amount of radiation energy deposited in the system) of chloride can be increased, because the radiation now starts a chain reaction: each solvated electron formed by the action of the gamma rays can convert more than one PCB molecule. [ 7 ] [ 8 ] If oxygen , acetone , nitrous oxide , sulfur hexafluoride or nitrobenzene [ 9 ] is present in the mixture, then the reaction rate is reduced. This work has been done recently in the US, often with used nuclear fuel as the radiation source. [ 10 ] [ 11 ] In addition to the work on the destruction of aryl chlorides, it has been shown that aliphatic chlorine and bromine compounds such as perchloroethylene, [ 12 ] Freon (1,1,2-trichloro-1,2,2-trifluoroethane) and halon-2402 (1,2-dibromo-1,1,2,2-tetrafluoroethane) can be dehalogenated by the action of radiation on alkaline isopropanol solutions. Again, a chain reaction has been reported. [ 13 ] In addition to the work on the reduction of organic compounds by irradiation, some work on the radiation-induced oxidation of organic compounds has been reported. For instance, the use of radiogenic hydrogen peroxide (formed by irradiation) to remove sulfur from coal has been reported. In this study it was found that the addition of manganese dioxide to the coal increased the rate of sulfur removal. [ 14 ] The degradation of nitrobenzene under both reducing and oxidizing conditions in water has been reported. [ 15 ] In addition to the reduction of organic compounds by solvated electrons, it has been reported that upon irradiation a pertechnetate solution at pH 4.1 is converted to a colloid of technetium dioxide. When a solution at pH 1.8 is irradiated, soluble Tc(IV) complexes are formed. Irradiation of a solution at pH 2.7 forms a mixture of the colloid and the soluble Tc(IV) compounds. [ 16 ] Gamma irradiation has been used in the synthesis of nanoparticles of gold on iron oxide (Fe₂O₃). [ 17 ] It has been shown that the irradiation of aqueous solutions of lead compounds leads to the formation of elemental lead. When an inorganic solid such as bentonite and sodium formate are present, the lead is removed from the aqueous solution. [ 18 ] Another key area uses radiation chemistry to modify polymers. Using radiation, it is possible to convert monomers to polymers , to crosslink polymers, and to break polymer chains. [ 19 ] [ 20 ] Both man-made and natural polymers (such as carbohydrates [ 21 ] ) can be processed in this way. Both the harmful effects of radiation upon biological systems (induction of cancer and acute radiation injuries ) and the useful effects of radiotherapy involve the radiation chemistry of water. The vast majority of biological molecules are present in an aqueous medium; when water is exposed to radiation, the water absorbs energy, and as a result forms chemically reactive species that can interact with dissolved substances ( solutes ). Water is ionized to form a solvated electron and H₂O⁺; the H₂O⁺ cation can react with water to form a hydrated proton (H₃O⁺) and a hydroxyl radical (HO·). Furthermore, the solvated electron can recombine with the H₂O⁺ cation to form an excited state of the water. This excited state then decomposes to species such as hydroxyl radicals (HO·), hydrogen atoms (H·) and oxygen atoms (O·). 
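The G value mentioned above is conventionally quoted as the number of species formed per 100 eV of absorbed energy, so a chemical yield follows directly from an absorbed dose. A minimal Python sketch; the G value, dose, and sample mass below are illustrative assumptions, not figures from the cited studies:

```python
# Chemical yield from an absorbed dose using a G value
# (species formed per 100 eV of energy absorbed).
AVOGADRO = 6.022_140_76e23
EV = 1.602_176_634e-19  # joules per electronvolt

def yield_mol(g_per_100ev: float, dose_gy: float, mass_kg: float) -> float:
    """Moles of product formed in a sample of mass_kg receiving dose_gy."""
    energy_j = dose_gy * mass_kg            # absorbed energy (Gy = J/kg)
    events = energy_j / (100.0 * EV)        # number of 100 eV packets
    return events * g_per_100ev / AVOGADRO  # molecules -> moles

# e.g. an assumed G = 3 per 100 eV, 1 kGy delivered to 1 kg of solution:
print(f"{yield_mol(3.0, 1000.0, 1.0):.2e} mol")  # ~3.1e-4 mol
```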
Finally, the solvated electron can react with solutes such as solvated protons or oxygen molecules to form hydrogen atoms and dioxygen radical anions, respectively. The fact that oxygen changes the radiation chemistry may be one reason why oxygenated tissues are more sensitive to irradiation than the deoxygenated tissue at the center of a tumor. The free radicals, such as the hydroxyl radical, chemically modify biomolecules such as DNA , leading to damage such as breaks in the DNA strands. Some substances can protect against radiation-induced damage by reacting with the reactive species generated by the irradiation of the water. It is important to note that the reactive species generated by the radiation can take part in follow-on reactions ; this is similar to the non-electrochemical reactions that follow the electrochemical event observed in cyclic voltammetry when a non-reversible process occurs. For example, the SF₅ radical formed by the reaction of solvated electrons and SF₆ undergoes further reactions which lead to the formation of hydrogen fluoride and sulfuric acid . [ 22 ] In water, the dimerization reaction of hydroxyl radicals can form hydrogen peroxide , while in saline systems the reaction of the hydroxyl radicals with chloride anions forms hypochlorite anions. The action of radiation upon underground water is responsible for the formation of hydrogen , which is converted by bacteria into methane . [ 23 ] [ 24 ] To process materials, either a gamma source or an electron beam can be used. The international type IV ( wet storage ) irradiator is a common design, of which the JS6300 and JS6500 gamma sterilizers (made by Nordion International, which used to trade as Atomic Energy of Canada Ltd) are typical examples. [ 25 ] In these irradiation plants, the source is stored in a deep well filled with water when not in use. When the source is required, it is moved by a steel wire to the irradiation room where the products which are to be treated are present; these objects are placed inside boxes which are moved through the room by an automatic mechanism. By moving the boxes from one point to another, the contents are given a uniform dose. After treatment, the product is moved by the automatic mechanism out of the room. The irradiation room has very thick concrete walls (about 3 m thick) to prevent gamma rays from escaping. The source consists of ⁶⁰Co rods sealed within two layers of stainless steel. The rods are combined with inert dummy rods to form a rack with a total activity of about 12.6 PBq (340 kCi). While it is possible to do some types of research using an irradiator much like that used for gamma sterilization, it is common in some areas of science to use a time-resolved experiment in which a material is subjected to a pulse of radiation (normally electrons from a LINAC ). After the pulse of radiation, the concentrations of different substances within the material are measured by emission spectroscopy or absorption spectroscopy , hence the rates of reactions can be determined. This allows the relative abilities of substances to react with the reactive species generated by the action of radiation on the solvent (commonly water) to be measured. This experiment is known as pulse radiolysis , [ 26 ] which is closely related to flash photolysis . In the latter experiment, the sample is excited by a pulse of light to examine the decay of the excited states by spectroscopy ; [ 27 ] sometimes the formation of new compounds can be investigated. [ 28 ]
Flash photolysis experiments have led to a better understanding of the effects of halogen -containing compounds upon the ozone layer . [ 29 ] The SAW chemosensor [ 30 ] is nonionic and nonspecific. It directly measures the total mass of each chemical compound as it exits the gas chromatography column and condenses on the crystal surface, thus causing a change in the fundamental acoustic frequency of the crystal. Odor concentration is directly measured with this integrating type of detector. Column flux is obtained from a microprocessor that continuously calculates the derivative of the SAW frequency.
https://en.wikipedia.org/wiki/Radiation_chemistry
Radiation damage is the effect of ionizing radiation on physical objects, including non-living structural materials. It can be either detrimental or beneficial for materials. Radiobiology is the study of the action of ionizing radiation on living things , including the health effects of radiation in humans . High doses of ionizing radiation can cause damage to living tissue such as radiation burning and harmful mutations, such as causing cells to become cancerous , and can lead to health problems such as radiation poisoning . This radiation may take several forms, including alpha, beta, gamma, X-ray, and neutron radiation. Radiation may affect materials and devices in both deleterious and beneficial ways. Many of the radiation effects on materials are produced by collision cascades and covered by radiation chemistry . Radiation can have harmful effects on solid materials, as it can degrade their properties so that they are no longer mechanically sound. This is of special concern as it can greatly affect their ability to perform in nuclear reactors, and is the emphasis of radiation materials science , which seeks to mitigate this danger. As a result of their usage and exposure to radiation, the effects on metals and concrete are particular areas of study. For metals, exposure to radiation can result in radiation hardening , which strengthens the material while subsequently embrittling it (lowering toughness , allowing brittle fracture to occur). This occurs as a result of knocking atoms out of their lattice sites, through both the initial interaction and a resulting cascade of damage, leading to the creation of defects and dislocations (similar to work hardening and precipitation hardening ). Grain boundary engineering through thermomechanical processing has been shown to mitigate these effects by changing the fracture mode from intergranular (occurring along grain boundaries) to transgranular. This increases the strength of the material, mitigating the embrittling effect of radiation. [ 1 ] Radiation can also lead to segregation and diffusion of atoms within materials, leading to phase segregation and voids, as well as enhancing the effects of stress corrosion cracking through changes in both the water chemistry and the alloy microstructure. [ 2 ] [ 3 ] As concrete is used extensively in the construction of nuclear power plants, where it provides structure as well as containing radiation, the effect of radiation on it is also of major interest. During its lifetime, concrete will change properties naturally due to its normal aging process; however, nuclear exposure will lead to a loss of mechanical properties due to swelling of the concrete aggregates, thus damaging the bulk material. For instance, the biological shield of the reactor is frequently composed of Portland cement , to which dense aggregates are added in order to decrease the radiation flux through the shield. These aggregates can swell and make the shield mechanically unsound. Numerous studies have shown decreases in both the compressive and tensile strength, as well as the elastic modulus, of concrete at a dosage of around 10¹⁹ neutrons per square centimetre. [ 4 ] These trends were also shown to exist in reinforced concrete , a composite of both concrete and steel. [ 5 ] The knowledge gained from current analyses of materials in fission reactors with regard to the effects of temperature, irradiation dosage, materials compositions, and surface treatments will be helpful in the design of future fission reactors as well as the development of fusion reactors . 
[ 6 ] Solids subject to radiation are constantly being bombarded with high-energy particles. The interaction between the particles and the atoms in the lattice of the reactor materials causes displacement of the atoms. [ 7 ] Over the course of sustained bombardment, some of the atoms do not come to rest at lattice sites, which results in the creation of defects . These defects cause changes in the microstructure of the material, and ultimately result in a number of radiation effects. The probability of an interaction between an incident particle and a target atom is characterized by the thermal neutron cross section (measured in barns ). Given a macroscopic cross section of Σ = σρ_A (where σ is the microscopic cross section and ρ_A is the density of atoms in the target), and a reaction rate of R = ΦΣ = Φσρ_A (where Φ is the beam flux), the probability of an interaction within a thin slab of thickness dx is P dx = σ(E_i) N_j dx = Σ dx. The source tabulates thermal neutron cross sections, in barns, for common atoms and alloys. [ 8 ] Microstructural evolution is driven in the material by the accumulation of defects over a period of sustained radiation. This accumulation is limited by defect recombination, by clustering of defects, and by the annihilation of defects at sinks. Defects must thermally migrate to sinks, and in doing so often recombine, or arrive at sinks to recombine. In most cases, D_rad = D_v C_v + D_i C_i ≫ D_therm; that is to say, the motion of interstitials and vacancies throughout the lattice structure of a material as a result of radiation often outweighs the thermal diffusion of the same material. One consequence of a flux of vacancies towards sinks is a corresponding flux of atoms away from the sink. If vacancies are not annihilated or recombined before collecting at sinks, they will form voids. At sufficiently high temperature, dependent on the material, these voids can fill with gases from the decomposition of the alloy, leading to swelling in the material. [ 9 ] This is a tremendous issue for pressure-sensitive or constrained materials that are under constant radiation bombardment, as in pressurized water reactors . In many cases, the radiation flux is non-stoichiometric, which causes segregation within the alloy. This non-stoichiometric flux can result in significant change in local composition near grain boundaries, [ 10 ] where the movement of atoms and dislocations is impeded. When this flux continues, solute enrichment at sinks can result in the precipitation of new phases. Radiation hardening is the strengthening of the material in question by the introduction of defect clusters, impurity-defect cluster complexes, dislocation loops, dislocation lines, voids, bubbles and precipitates. For pressure vessels, the loss in ductility that occurs as a result of the increase in hardness is a particular concern. Radiation embrittlement results in a reduction of the energy to fracture, due to a reduction in strain hardening (as hardening is already occurring during irradiation). This arises for reasons very similar to those that cause radiation hardening: the development of defect clusters, dislocations, voids, and precipitates. 
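A worked example of Σ = σρ_A and R = ΦΣ; the hydrogen capture cross section, atom density of hydrogen in water, and neutron flux below are typical textbook-scale values assumed for illustration, not data from the cited sources:

```python
# Macroscopic cross section and neutron reaction rate, following
# Sigma = sigma * rho_A and R = Phi * Sigma from the text.
BARN = 1e-24  # cm^2

def macroscopic_xs(sigma_barn: float, atom_density_per_cm3: float) -> float:
    """Macroscopic cross section Sigma (1/cm)."""
    return sigma_barn * BARN * atom_density_per_cm3

# Example: hydrogen in water. Assumed thermal capture sigma ~ 0.33 b,
# hydrogen atom density in water ~ 6.7e22 atoms/cm^3.
sigma_h = macroscopic_xs(0.33, 6.7e22)

phi = 1e13  # neutron flux, n/(cm^2 s), order of a research reactor core
print(f"Sigma = {sigma_h:.3e} 1/cm")                    # ~2.2e-2 1/cm
print(f"R = {phi * sigma_h:.3e} reactions/(cm^3 s)")    # ~2.2e11
```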
Variations in these parameters make the exact amount of embrittlement difficult to predict, [ 11 ] but the generalized values for the measurement show predictable consistency. Thermal creep in irradiated materials is negligible by comparison with irradiation creep, which can exceed 10⁻⁶ s⁻¹. [ 12 ] The mechanism is not enhanced diffusivity, as would be intuitive from the elevated temperature, but rather interaction between the stress and the developing microstructure. Stress induces the nucleation of loops and causes preferential absorption of interstitials at dislocations, which results in swelling. [ 13 ] Swelling, in combination with embrittlement and hardening, can have disastrous effects on any nuclear material under substantial pressure. Growth in irradiated materials is caused by Diffusion Anisotropy Difference (DAD). This phenomenon frequently occurs in zirconium, graphite, and magnesium because of the natural anisotropy of their crystal structures. Thermal and electrical conductivity rely on the transport of energy through the electrons and the lattice of a material. Defects in the lattice and substitution of atoms via transmutation disturb these pathways, leading to a reduction in both types of conduction by radiation damage. The magnitude of the reduction depends on the dominant type of conductivity in the material (electronic, governed by the Wiedemann–Franz law , or phononic) and on the details of the radiation damage, and is therefore still hard to predict. Radiation damage can affect polymers that are found in nuclear reactors, medical devices, electronic packaging, and aerospace parts, as well as polymers that undergo sterilization or irradiation for use in the food and pharmaceutical industries. [ 14 ] [ 15 ] Ionizing radiation can also be used to intentionally strengthen and modify the properties of polymers. [ 16 ] Research in this area has focused on the three most common sources of radiation used for these applications: gamma, electron beam, and X-ray radiation. [ 17 ] The mechanisms of radiation damage are different for polymers and metals, since dislocations and grain boundaries do not have real significance in a polymer. Instead, polymers deform via the movement and rearrangement of chains, which interact through Van der Waals forces and hydrogen bonding. In the presence of high energy input, such as ionizing radiation, the covalent bonds along the polymer chains can be broken, forming pairs of free radicals . These radicals then participate in a number of polymerization reactions that fall under the classification of radiation chemistry . Crosslinking describes the process through which carbon-centered radicals on different chains combine to form a network of crosslinks . In contrast, chain scission occurs when a carbon-centered radical on the polymer backbone reacts with another free radical, typically from oxygen in the atmosphere, causing a break in the main chain. Free radicals can also undergo reactions that graft new functional groups onto the backbone, or laminate two polymer sheets without an adhesive. [ 17 ] There is contradictory information about the expected effects of ionizing radiation for most polymers, since the conditions of radiation are so influential. For example, dose rate determines how fast free radicals are formed and whether they are able to diffuse through the material to recombine, or participate in chemical reactions. 
[ 18 ] The ratio of crosslinking to chain scission is also affected by temperature, environment, presence of oxygen versus inert gases, radiation source (changing the penetration depth), and whether the polymer has been dissolved in an aqueous solution. [ 15 ] Crosslinking and chain scission have diverging effects on mechanical properties. Irradiated polymers typically undergo both types of reactions simultaneously, but not necessarily to the same extent. [ 19 ] Crosslinks strengthen the polymer by preventing chain sliding, effectively leading to thermoset behavior. Crosslinks and branching lead to higher molecular weight and polydispersity. [ 18 ] Thus, these polymers will generally have increased stiffness, tensile strength, and yield strength, [ 20 ] and decreased solubility. [ 14 ] Polyethylene is well known to experience improved mechanical properties as a result of crosslinking, including increased tensile strength and decreased elongation at break. [ 16 ] Thus, it has “several advantageous applications in areas as diverse as rock bolts for mining, reinforcement of concrete, manufacture of light weight high strength ropes and high performance fabrics.” [ 14 ] In contrast, chain scission reactions will weaken the material by decreasing the average molecular weight of the chains, such that tensile and flexural strength decrease and solubility increases. [ 14 ] Chain scission occurs primarily in the amorphous regions of the polymer. It can increase crystallinity in these regions by making it easier for the short chains to reassemble. Thus, it has been observed that crystallinity increases with dose, [ 18 ] leading to a more brittle material on the macroscale. In addition, “gaseous products, such as CO 2 , may be trapped in the polymer, and this can lead to subsequent crazing and cracking due to accumulated local stresses." [ 14 ] An example of this phenomenon is 3D printed materials, which are often porous as a result of their printing configuration. [ 20 ] Oxygen can diffuse into the pores and react with the surviving free radicals, leading to embrittlement . [ 20 ] Some materials continue to weaken through aging, as the remaining free radicals react. [ 15 ] The resistance of these polymers to radiation damage can be improved by grafting or copolymerizing aromatic groups, which enhance stability and decrease reactivity, and by adding antioxidants and nanomaterials , which act as free radical scavengers. [ 19 ] In addition, higher molecular weight polymers will be more resistant to radiation. [ 18 ] Exposure to radiation causes chemical changes in gases. The least susceptible to damage are noble gases , where the major concern is the nuclear transmutation with follow-up chemical reactions of the nuclear reaction products. High-intensity ionizing radiation in air can produce a visible ionized air glow of telltale bluish-purplish color. The glow can be observed e.g. during criticality accidents , around mushroom clouds shortly after a nuclear explosion , or inside of a damaged nuclear reactor like during the Chernobyl disaster . Significant amounts of ozone can be produced. Even small amounts of ozone can cause ozone cracking in many polymers over time, in addition to the damage by the radiation itself. In some gaseous ionisation detectors , radiation damage to gases plays an important role in the device's ageing, especially in devices exposed for long periods to high intensity radiation, e.g. 
detectors for the Large Hadron Collider or the Geiger–Müller tube. Ionization processes require energy above 10 eV, while splitting covalent bonds in molecules and generating free radicals requires only 3–4 eV. The electrical discharges initiated by the ionization events caused by the particles result in a plasma populated by a large amount of free radicals. The highly reactive free radicals can recombine back to the original molecules, or initiate a chain of free-radical polymerization reactions with other molecules, yielding compounds with increasing molecular weight. These high molecular weight compounds then precipitate from the gaseous phase, forming conductive or non-conductive deposits on the electrodes and insulating surfaces of the detector and distorting its response. Gases containing hydrocarbon quenchers, e.g. argon–methane, are typically sensitive to aging by polymerization; addition of oxygen tends to lower the aging rates. Trace amounts of silicone oils, present from outgassing of silicone elastomers and especially from traces of silicone lubricants, tend to decompose and form deposits of silicon crystals on the surfaces. Gaseous mixtures of argon (or xenon) with carbon dioxide, and optionally also with 2–3% of oxygen, are highly tolerant to high radiation fluxes. The oxygen is added because a noble gas with carbon dioxide alone has too high a transparency for high-energy photons; ozone formed from the oxygen is a strong absorber of ultraviolet photons. Carbon tetrafluoride can be used as a component of the gas for high-rate detectors; the fluorine radicals produced during the operation however limit the choice of materials for the chambers and electrodes (e.g. gold electrodes are required, as the fluorine radicals attack metals, forming fluorides). Addition of carbon tetrafluoride can however eliminate the silicon deposits. Presence of hydrocarbons together with carbon tetrafluoride leads to polymerization. A mixture of argon, carbon tetrafluoride, and carbon dioxide shows low aging in a high hadron flux. [ 21 ] Like gases, liquids lack fixed internal structure; the effects of radiation are therefore mainly limited to radiolysis, altering the chemical composition of the liquids. As with gases, one of the primary mechanisms is the formation of free radicals. All liquids are subject to radiation damage, with few exotic exceptions; e.g. molten sodium, where there are no chemical bonds to be disrupted, and liquid hydrogen fluoride, which produces gaseous hydrogen and fluorine that spontaneously react back to hydrogen fluoride. Water subjected to ionizing radiation forms free radicals of hydrogen and hydroxyl, which can recombine to form gaseous hydrogen, oxygen, hydrogen peroxide, hydroxyl radicals, and peroxide radicals. In living organisms, which are composed mostly of water, the majority of the damage is caused by the reactive oxygen species, free radicals produced from water. The free radicals attack the biomolecules forming structures within the cells, causing oxidative stress (cumulative damage which may be significant enough to cause cell death, or may cause DNA damage possibly leading to cancer). In cooling systems of nuclear reactors, the formation of free oxygen would promote corrosion and is counteracted by the addition of hydrogen to the cooling water. [ 22 ] The hydrogen is not consumed, as for each molecule reacting with oxygen one molecule is liberated by radiolysis of water; the excess hydrogen just serves to shift the reaction equilibria by providing the initial hydrogen radicals.
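To make the scale of radiolysis concrete, the chemical yield of a radiolysis product is conventionally expressed as a G-value, the number of molecules formed per 100 eV of absorbed energy. The following sketch assumes an illustrative G(H₂) of about 0.45 molecules per 100 eV for gamma radiolysis of water, a commonly quoted textbook figure; treat it and the chosen dose as placeholder inputs rather than authoritative values.

```python
# Estimate radiolytic H2 yield in irradiated water from a G-value.
# Assumptions (illustrative, not from the article): G(H2) ~ 0.45
# molecules per 100 eV for low-LET gamma radiation; 1 kg of water;
# an absorbed dose of 200 Gy (the steady-state dose scale mentioned
# in the text for air-free water).

EV_PER_JOULE = 6.241509e18   # eV per joule
AVOGADRO = 6.02214076e23     # molecules per mole

def h2_yield_mol(dose_gy: float, mass_kg: float, g_value: float = 0.45) -> float:
    """Moles of H2 produced; the G-value is molecules per 100 eV absorbed."""
    absorbed_energy_ev = dose_gy * mass_kg * EV_PER_JOULE  # Gy = J/kg
    molecules = absorbed_energy_ev / 100.0 * g_value
    return molecules / AVOGADRO

if __name__ == "__main__":
    moles = h2_yield_mol(dose_gy=200.0, mass_kg=1.0)
    print(f"H2 produced: {moles:.2e} mol (~{moles * 22.4:.2e} L at STP)")
```

At these doses the quantities produced are tiny (of order micromoles), consistent with the text's point that steady-state concentrations in air-free water are reached at around 200 Gy.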
The reducing environment in pressurized water reactors is less prone to buildup of oxidative species. The chemistry of boiling water reactor coolant is more complex, as the environment can be oxidizing. Most of the radiolytic activity occurs in the core of the reactor where the neutron flux is highest; the bulk of the energy is deposited in water from fast neutrons and gamma radiation, while the contribution of thermal neutrons is much lower. In air-free water, the concentrations of hydrogen, oxygen, and hydrogen peroxide reach steady state at about 200 Gy of radiation. In the presence of dissolved oxygen, the reactions continue until the oxygen is consumed and the equilibrium is shifted. Neutron activation of water leads to buildup of low concentrations of nitrogen species; due to the oxidizing effects of the reactive oxygen species, these tend to be present in the form of nitrate anions. In reducing environments, ammonia may be formed. The ammonia may, however, also be subsequently oxidized to nitrates. Other species present in the coolant water are the oxidized corrosion products (e.g. chromates) and fission products (e.g. pertechnetate and periodate anions, uranyl and neptunyl cations). [ 23 ] Absorption of neutrons in hydrogen nuclei leads to buildup of deuterium and tritium in the water. The behavior of supercritical water, important for supercritical water reactors, differs from the radiochemical behavior of liquid water and steam and is currently under investigation. [ 24 ] The magnitude of the effects of radiation on water depends on the type and energy of the radiation, namely its linear energy transfer. Gas-free water subjected to low-LET gamma rays yields almost no radiolysis products and sustains an equilibrium at a low concentration of them. High-LET alpha radiation produces larger amounts of radiolysis products. In the presence of dissolved oxygen, radiolysis always occurs. Dissolved hydrogen completely suppresses radiolysis by low-LET radiation, while radiolysis still occurs with high-LET radiation. The presence of reactive oxygen species has a strongly disruptive effect on dissolved organic chemicals. This is exploited in groundwater remediation by electron beam treatment. [ 25 ] Two main approaches to reduce radiation damage are to reduce the amount of energy deposited in the sensitive material (e.g. by shielding, distance from the source, or spatial orientation), or to modify the material to be less sensitive to radiation damage (e.g. by adding antioxidants, stabilizers, or choosing a more suitable material). In addition to the electronic device hardening mentioned above, some degree of protection may be obtained by shielding, usually with the interposition of high density materials (particularly lead, where space is critical, or concrete where space is available) between the radiation source and areas to be protected. For biological effects of substances such as radioactive iodine, the ingestion of non-radioactive isotopes may substantially reduce the biological uptake of the radioactive form, and chelation therapy may be applied to accelerate the removal of radioactive heavy metals from the body. Solid countermeasures to radiation damage consist of three approaches. Firstly, the matrix can be saturated with oversized solutes, which act to trap the point defects whose accumulation drives swelling, creep and dislocation motion. The solutes also help to hinder diffusion, which restricts the ability of the material to undergo radiation-induced segregation. [ 26 ]
Secondly, an oxide can be dispersed inside the matrix of the material. A dispersed oxide helps to prevent creep, to mitigate swelling, and to reduce radiation-induced segregation by preventing dislocation motion and the formation and motion of interstitials. [ 27 ] Finally, by engineering grains to be as small as possible, thus maximizing the density of grain boundaries, dislocation motion can be impeded, which prevents the embrittlement and hardening that result in material failure. [ 28 ] Ionizing radiation is generally harmful and potentially lethal to living things, but can have health benefits in radiation therapy for the treatment of cancer and thyrotoxicosis. Its most common impact is the induction of cancer with a latent period of years or decades after exposure. High doses can cause visually dramatic radiation burns and/or rapid fatality through acute radiation syndrome. Controlled doses are used for medical imaging and radiotherapy. Most adverse health effects of radiation exposure may be grouped in two general categories: deterministic effects (harmful tissue reactions), due in large part to the killing or malfunction of cells following high doses; and stochastic effects, i.e. cancer in exposed individuals owing to mutation of somatic cells, or heritable disease in their offspring owing to mutation of reproductive (germ) cells.
https://en.wikipedia.org/wiki/Radiation_damage
The following radiological protection instruments can be used to detect and measure ionizing radiation:
https://en.wikipedia.org/wiki/Radiation_detection
Radiation dose reconstruction refers to the process of estimating radiation doses that were received by individuals or populations in the past as a result of particular exposure situations of concern. [ 1 ] The basic principle of radiation dose reconstruction is to characterize the radiation environment to which individuals have been exposed using available information. In cases where radiation exposures cannot be fully characterized based on available data, default values based on reasonable scientific assumptions can be used as substitutes. The extent to which the default values are used depends on the purpose of the reconstruction(s) being undertaken. The methods and techniques used in dose reconstructions have been growing and evolving rapidly. It was not until the late 1970s that dose reconstruction emerged as a scientific discipline, [ 2 ] and it has been used in practice in the United States for the last two decades. [ 3 ] The scientific methods and practices used to complete dose reconstructions are often based on the standards published by international consensus organizations such as the International Commission on Radiological Protection. [ 2 ] When conducted properly, dose reconstruction is a scientifically valid process for estimating the radiation dose received by an individual or group of individuals. It is commonly used in occupational epidemiological studies to determine the amount of radiation workers may have received as part of their employment. For these types of studies, dose reconstruction is similar to the process of estimating how much radiation current workers receive, for example at a nuclear facility, except that dose reconstructions evaluate past exposures. The terms historical and retrospective are often used to describe a dose reconstruction. [ 3 ] Dose estimation is the term sometimes used to describe the process used to determine radiation exposures to current populations or individuals. Dose reconstruction methods have also commonly been applied in environmental settings to assess radionuclide releases into the environment from nuclear sites. One such environmentally focused work, published in 1983 by the U.S. Nuclear Regulatory Commission, is Radiological Risk Assessment: A Textbook on Environmental Dose Analysis. This book was updated with major revisions in 2008 and details the steps of radiological assessments, which use similar methods and techniques to a dose reconstruction. [ 4 ] Dose reconstruction methods are not limited to measuring exposures to radiation: dose reconstruction principles can be used to reconstruct exposures to other hazardous materials and to determine the health effects of those toxins on populations or individuals. The basic elements of the dose reconstruction process are summarized in A Review of the Dose Reconstruction Program of the Defense Threat Reduction Agency. [ 1 ] Radiation dose reconstruction methods are used to a large extent in occupational, environmental, and medical epidemiological research studies. The Centers for Disease Control and Prevention (CDC) has been involved in several dose reconstruction projects. Several CDC agencies are involved in dose reconstruction projects: the Agency for Toxic Substances and Disease Registry (ATSDR), the National Center for Environmental Health (NCEH), and the National Institute for Occupational Safety and Health (NIOSH).
The Agency for Toxic Substances and Disease Registry (ATSDR) conducts dose reconstructions in relation to work done at Superfund sites. ATSDR defines exposure-dose reconstruction as an approach that uses computational models and other approximation techniques to estimate cumulative amounts of hazardous substances internalized by individuals presumed to be, or who are actually, at risk from contact with substances associated with hazardous waste sites. In March 1993, ATSDR established the Exposure-Dose Reconstruction Program (EDRP). EDRP represents a coordinated, comprehensive effort to develop sensitive, integrated, science-based methods for improving health scientists' and assessors' access to current and historical exposure-dose characterization. EDRP was created to confront the challenge facing health scientists and assessors, who have not always had access to information, especially historical information, regarding an individual's direct measure of exposure to and dose of chemicals associated with hazardous waste sites. [ 5 ] The National Center for Environmental Health (NCEH) coordinates programs and conducts environmental epidemiological health studies using dose reconstruction principles. NCEH has undertaken a series of studies to assess the possible health consequences of off-site emissions of radioactive materials from DOE-managed nuclear facilities in the United States. [ 6 ] Dose reconstruction as used by NCEH is defined as the process of estimating doses to the public from past releases to the environment of radionuclides or chemicals. These doses form the basis for estimating health risks. Past exposures are the focus of the NCEH studies. [ 6 ] The National Institute for Occupational Safety and Health (NIOSH) completes dose reconstructions as a component of ongoing worker health studies. The NIOSH Occupational Energy Research Program's mission is to conduct relevant, unbiased research to identify and quantify health effects among workers exposed to ionizing radiation and other agents; to develop and refine exposure assessment methods; to effectively communicate study results to workers, scientists, and the public; to contribute scientific information for the prevention of occupational injury and illness; and to adhere to the highest standards of professional ethics and concern for workers' health, safety and privacy. [ 7 ] One of the largest mass applications of individual dose reconstruction principles is also being undertaken by NIOSH. NIOSH is the designated agency responsible for completing radiation dose reconstructions for individuals under the Energy Employees Occupational Illness Compensation Program Act of 2000 (the Act). Under the Act, individuals, and in some cases their survivors, are eligible for compensation for specified illnesses they developed from occupational exposures to beryllium, asbestos, toxic materials, and radiation, if they worked at a covered Department of Energy (DOE) facility or a facility that contracted with DOE to produce nuclear weapons or components, known as Atomic Weapons Employers (AWE). The program is administered by the Department of Labor. NIOSH's role under the Act concerns the probability that an individual's cancer was a result of their occupational radiation exposure at a DOE or AWE facility: this probability is determined by DOL and is based on the radiation dose reconstruction completed by NIOSH. The dose reconstructions are completed by individuals trained in the field of health physics.
The science behind the NIOSH dose reconstruction process was published in the peer-reviewed professional journal Health Physics: The Radiation Safety Journal in July 2008; that edition of the journal was dedicated entirely to the NIOSH Radiation Dose Reconstruction Program. The Department of Veterans Affairs uses dose reconstructions to process claims under the Nuclear Test Personnel Review (NTPR) program. The NTPR is a Department of Defense program that works to confirm veteran participation in U.S. atmospheric nuclear tests from 1945 to 1962 and in the occupation forces of Hiroshima and Nagasaki, Japan. If the veteran is a confirmed participant in these events, NTPR may provide either an actual or estimated radiation dose received by the veteran. The Defense Threat Reduction Agency completes the dose reconstructions for the NTPR program.
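Compensation decisions in programs like these typically hinge on a probability of causation, commonly computed from the excess relative risk (ERR) implied by the reconstructed dose as PC = ERR / (1 + ERR). The sketch below illustrates that relationship with a deliberately simplified linear ERR model; the risk coefficient and doses are placeholder assumptions for illustration, not values from the actual NIOSH risk models, which are cancer-site-, age- and sex-specific and include uncertainty analysis.

```python
# Illustrative probability-of-causation calculation for a reconstructed dose.
# Assumptions (hypothetical, for illustration only): a linear excess relative
# risk model ERR = beta * dose, with beta = 0.5 per Sv. Real programs use
# cancer-site-, age- and sex-specific risk models with uncertainty analysis.

def excess_relative_risk(dose_sv: float, beta_per_sv: float = 0.5) -> float:
    """Linear no-threshold ERR model (placeholder coefficient)."""
    return beta_per_sv * dose_sv

def probability_of_causation(err: float) -> float:
    """PC = ERR / (1 + ERR): share of total risk attributable to the exposure."""
    return err / (1.0 + err)

if __name__ == "__main__":
    for dose in (0.01, 0.1, 0.5):  # reconstructed doses in sievert
        err = excess_relative_risk(dose)
        print(f"dose {dose:5.2f} Sv -> ERR {err:.3f} -> PC {probability_of_causation(err):.1%}")
```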
https://en.wikipedia.org/wiki/Radiation_dose_reconstruction
Radiation effect refers to the physical and chemical property changes induced in materials by radiation. One such phenomenon is acute radiation syndrome, caused by exposure to ionizing radiation. [ 1 ]
https://en.wikipedia.org/wiki/Radiation_effect
When optical fibers are exposed to ionizing radiation such as energetic electrons, protons, neutrons, X-rays, γ-radiation, etc., they undergo 'damage'. [ 1 ] [ 2 ] The term 'damage' primarily refers to added optical absorption, resulting in loss of the propagating optical signal leading to decreased power at the output end, which could lead to premature failure of the component and/or system. In the professional literature, the effect is often named radiation-induced attenuation (RIA), or radiation-induced darkening. The loss of power or 'darkening' occurs because the chemical bonds forming the optical fiber core are disrupted by the impinging high-energy radiation, resulting in the appearance of new electronic transition states giving rise to additional absorption in the wavelength regions of interest. The radiation-induced defects tend to absorb more at shorter wavelengths, [ 3 ] and hence radiation-damaged glass appears to yellow. Once the radiation source is removed, the fiber can recover some of its original transparency [ 3 ] (a process called recovery or "self-healing"), which occurs due to thermal annealing or photobleaching of the defects. [ 2 ] The extent of damage is governed by the balance between defect generation (excess attenuation) on the one hand and defect annihilation (recovery) on the other. [ 2 ] If the dose rate is low, an equilibrium state (between attenuation and recovery) is reached with some degree of darkening. However, if the dose rate is high, the utility of the fiber depends on the overall induced attenuation and the recovery time. Understanding these radiation-induced effects is particularly important for space-based applications, where optical fibers are being considered for use in an increasing number of applications. [ 3 ] [ 4 ] Intrinsic defects are present in the matrix of even a single-component glass material like pure silica. These include per-oxy linkages, POL (≡Si-O-O-Si≡), which are oxygen interstitials, and oxygen-deficient centers, ODC (≡Si-Si≡), which are oxygen vacancies. [ 4 ] When exposed to ionizing radiation, these sites trap charge (typically holes) to form per-oxy radicals, POR (≡Si-O-O.), and E' centers (≡Si.), respectively. These trapped charges interact with the electric field of the electromagnetic wave, causing absorption. In addition, rapidly cooled silica has strained ≡Si-O-Si≡ bonds, which are cleaved upon irradiation to form non-bridging oxygen hole centers (NBOHC), depicted as ≡Si-O., and E' centers by trapping holes and electrons, respectively. [ 5 ] When the glass contains a second network former with the same valence as silicon, such as germanium, the difference in the electronegativities favors the dopant as a hole trap. Hence radiation damage occurs in doped silica glass. To improve the radiation resistance of pure silica core fibers, it is necessary to minimize the number density of these intrinsic defects. Minimization of defects is achieved not only by reducing the incorporation of impurities in the glass but also by controlling the input gas composition, optimizing the thermal history of the glass at all stages of fiber manufacturing and optimizing the stress in the fiber core. Other strategies include incorporation of dopants (such as fluorine) in the core that minimize formation of the defect centers discussed above. [ 6 ]
All optical fibers undergo some darkening depending on a number of factors that include: ionization type, optical fiber core glass composition, operating wavelength, dose rate, total accumulated dose, temperature and power propagating through the core. [ 1 ] Since the attenuation is composition dependent, it is observed that fibers having pure silica cores and fluorine down-doped claddings are amongst the most radiation-hard fibers. The presence of dopants in the core such as germanium, phosphorus, boron, aluminum, erbium, ytterbium, thulium, holmium etc. compromises the radiation hardness of optical fibers. To minimize damage consequences, it is better to use a pure silica core fiber at a higher operating wavelength, lower dose rate, lower total accumulated dose, higher temperature (accelerated recovery) and higher signal power (photo-bleaching). In addition to these intrinsic steps, external engineering may be required to shield the fiber from the effects of radiation. [ 4 ] Germanium-doped core fibers can be radiation hard even at high concentrations of germanium. Such fibers reach saturation, anneal well at higher temperatures and are also responsive to photo-bleaching. In the case of phosphorus-doped core fibers, attenuation increases linearly with increasing phosphorus content and these fibers do not reach saturation. Recovery is very difficult even at higher temperatures. Boron, aluminum and all the rare-earth dopants significantly affect fiber loss. [ 7 ] Radiation performances of various SM, MM and PM fibers manufactured by different vendors, tested in a wide range of radiation environments, have been compiled. [ 7 ]
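The balance between defect generation and recovery described above lends itself to a simple rate-equation picture. The sketch below integrates a first-order model, dA/dt = k_g·Ḋ − k_r·A, where A is the radiation-induced attenuation, Ḋ the dose rate, k_g a generation coefficient and k_r a recovery (annealing/photobleaching) rate. All coefficients are hypothetical placeholders, chosen only to show how a steady dose rate settles to an equilibrium darkening while the fiber recovers once the source is removed.

```python
# Toy kinetic model of radiation-induced attenuation (RIA) in a fiber:
#   dA/dt = k_g * dose_rate - k_r * A
# A approaches the equilibrium k_g*dose_rate/k_r under steady irradiation and
# decays exponentially ("recovery") once the source is removed.
# All coefficients are hypothetical, for illustration only.

def simulate_ria(dose_rate, t_irradiate, t_total, k_g=0.5, k_r=0.01, dt=1.0):
    """Return (time, attenuation) lists; attenuation in arbitrary dB/km units."""
    times, atten = [], []
    a = 0.0
    t = 0.0
    while t <= t_total:
        rate = dose_rate if t < t_irradiate else 0.0  # source removed after t_irradiate
        a += (k_g * rate - k_r * a) * dt              # explicit Euler step
        times.append(t)
        atten.append(a)
        t += dt
    return times, atten

if __name__ == "__main__":
    _, a = simulate_ria(dose_rate=0.2, t_irradiate=600, t_total=1200)
    print(f"equilibrium RIA ~ {max(a):.1f}; after recovery ~ {a[-1]:.2f}")
```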
https://en.wikipedia.org/wiki/Radiation_effects_on_optical_fibers
In antenna theory, radiation efficiency is a measure of how well a radio antenna converts the radio-frequency power accepted at its terminals into radiated power. Likewise, in a receiving antenna it describes the proportion of the radio wave's power intercepted by the antenna which is actually delivered as an electrical signal. It is not to be confused with antenna efficiency, which applies to aperture antennas such as a parabolic reflector or phased array, or antenna/aperture illumination efficiency, which relates the maximum directivity of an antenna/aperture to its standard directivity. [ 1 ] Radiation efficiency is defined as "The ratio of the total power radiated by an antenna to the net power accepted by the antenna from the connected transmitter." [ 1 ] It is sometimes expressed as a percentage (less than 100), and is frequency dependent. It can also be described in decibels. The gain of an antenna is the directivity multiplied by the radiation efficiency. [ 2 ] Thus, we have G = e_R D, where G is the gain of the antenna in a specified direction, e_R is the radiation efficiency, and D is the directivity of the antenna in the specified direction. For wire antennas, which have a defined radiation resistance, the radiation efficiency is the ratio of the radiation resistance to the total resistance of the antenna including ground loss (see below) and conductor resistance. [ 3 ] [ 4 ] In practical cases the resistive loss in any tuning and/or matching network is often included, although network loss is strictly not a property of the antenna. For other types of antenna the radiation efficiency is less easy to calculate and is usually determined by measurements. In the case of an antenna or antenna array having multiple ports, the radiation efficiency depends on the excitation. More precisely, the radiation efficiency depends on the relative phases and the relative amplitudes of the signals applied to the different ports. [ 5 ] This dependence is always present, but it is easier to interpret in the case where the interactions between the ports are sufficiently small. These interactions may be large in many actual configurations, for instance in an antenna array built in a mobile phone to provide spatial diversity and/or spatial multiplexing. [ 6 ] In this context, it is possible to define an efficiency metric as the minimum radiation efficiency for all possible excitations, denoted by e_R,min, which is related to the radiation efficiency figure given by F_RE = √(1 − e_R,min). [ 5 ] Another interesting efficiency metric is the maximum radiation efficiency for all possible excitations, denoted by e_R,max. It is possible to consider that using e_R,min as a design parameter is particularly relevant to a multiport antenna array intended for MIMO transmission with spatial multiplexing, and that using e_R,max as a design parameter is particularly relevant to a multiport antenna array intended for beamforming in a single direction or over a small solid angle. [ 7 ] Measurements of the radiation efficiency are difficult. Classical techniques include the "Wheeler method" (also referred to as the "Wheeler cap method") and the "Q factor method". [ 8 ] [ 9 ] The Wheeler method uses two impedance measurements, one of them with the antenna located in a metallic box (the cap).
Unfortunately, the presence of the cap is likely to significantly modify the current distribution on the antenna, so that the resulting accuracy is difficult to determine. The Q factor method does not use a metallic enclosure, but the method is based on the assumption that the Q factor of an ideal antenna is known, the ideal antenna being identical to the actual antenna except that the conductors have perfect conductivity and any dielectrics have zero loss. Thus, the Q factor method is only semi-experimental, because it relies on a theoretical computation using an assumed geometry of the actual antenna. Its accuracy is also difficult to determine. Other radiation efficiency measurement techniques include: the pattern integration method, which requires gain measurements over many directions and two polarizations; and reverberation chamber techniques, which utilize a mode-stirred reverberation chamber. [ 8 ] [ 10 ] The loss of radio-frequency power to heat can be subdivided many different ways, depending on the number of significantly lossy objects electrically coupled to the antenna, and on the level of detail desired. Typically the simplest is to consider two types of loss: ohmic loss and ground loss . [ a ] When discussed as distinct from ground loss , the term ohmic loss refers to the heat-producing resistance to the flow of radio current in the conductors of the antenna, their electrical connections, and possibly loss in the antenna's feed cable. Because of the skin effect , resistance to radio-frequency current is generally much higher than direct current resistance. For vertical monopoles and other antennas placed near the ground, ground loss occurs due to the electrical resistance encountered by radio-frequency fields and currents passing through the soil in the vicinity of the antenna, as well as ohmic resistance in metal objects in the antenna's surroundings (such as its mast or stalk), and ohmic resistance in its ground plane / counterpoise, and in electrical and mechanical bonding connections. When considering antennas that are mounted a few wavelengths above the earth on a non-conducting, radio-transparent mast, ground losses are small enough compared to conductor losses that they can be ignored. [ b ]
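For wire antennas, the definitions above reduce to simple resistance ratios, which is also the idea behind the Wheeler cap method: with the cap suppressing radiation, the measured input resistance is approximately the loss resistance alone, while the free-space measurement gives loss plus radiation resistance. The sketch below applies that textbook relation together with the gain formula G = e_R D; the numeric resistances are invented examples, and a real measurement is subject to the caveat discussed above that the cap must not perturb the current distribution.

```python
# Radiation efficiency of a wire antenna from resistance measurements.
# Model: R_free = R_rad + R_loss (antenna in free space),
#        R_cap  = R_loss         (radiation suppressed by the Wheeler cap).
# Efficiency e_R = R_rad / (R_rad + R_loss) = (R_free - R_cap) / R_free.
# Example values are invented for illustration.

import math

def radiation_efficiency(r_free_ohm: float, r_cap_ohm: float) -> float:
    """Wheeler-cap estimate of radiation efficiency (0..1)."""
    return (r_free_ohm - r_cap_ohm) / r_free_ohm

def gain_dbi(directivity_dbi: float, efficiency: float) -> float:
    """Gain = directivity x efficiency, expressed in dB."""
    return directivity_dbi + 10.0 * math.log10(efficiency)

if __name__ == "__main__":
    e = radiation_efficiency(r_free_ohm=55.0, r_cap_ohm=5.5)  # 90% efficient
    print(f"efficiency: {e:.1%}, gain for a D = 2.15 dBi dipole: {gain_dbi(2.15, e):.2f} dBi")
```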
https://en.wikipedia.org/wiki/Radiation_efficiency
Radiation exposure may refer to:
https://en.wikipedia.org/wiki/Radiation_exposure_(disambiguation)
Radiation hormesis is the hypothesis that low doses of ionizing radiation (within the region of and just above natural background levels) are beneficial, stimulating the activation of repair mechanisms that protect against disease and that are not activated in the absence of ionizing radiation. The reserve repair mechanisms are hypothesized to be sufficiently effective when stimulated as to not only cancel the detrimental effects of ionizing radiation but also inhibit disease not related to radiation exposure (see hormesis). [ 1 ] [ 2 ] [ 3 ] [ 4 ] It has been a mainstream concept since at least 2009. [ 5 ] While the effects of high and acute doses of ionising radiation are easily observed and understood in humans (e.g. Japanese atomic bomb survivors), the effects of low-level radiation are very difficult to observe and highly controversial. This is because the baseline cancer rate is already very high and the risk of developing cancer fluctuates by about 40% because of individual lifestyle and environmental effects, [ 6 ] [ 7 ] obscuring the subtle effects of low-level radiation. An acute effective dose of 100 millisieverts may increase cancer risk by ~0.8%. However, children are particularly sensitive to radioactivity, with childhood leukemias and other cancers increasing even within natural and man-made background radiation levels (under 4 mSv cumulative, with 1 mSv being an average annual dose from terrestrial and cosmic radiation, excluding radon, which primarily doses the lung). [ 8 ] [ 9 ] There is limited evidence that exposures around this dose level will cause negative subclinical health impacts to neural development. [ 10 ] Students born in regions of higher Chernobyl fallout performed worse in secondary school, particularly in mathematics. "Damage is accentuated within families (i.e., siblings comparison) and among children born to parents with low education...", who often do not have the resources to overcome this additional health challenge. [ 11 ] Hormesis remains largely unknown to the public. Government and regulatory bodies disagree on the existence of radiation hormesis, and research points to the "severe problems and limitations" with the use of hormesis in general as the "principal dose-response default assumption in a risk assessment process charged with ensuring public health protection." [ 12 ] Quoting results from a literature database review, the Académie des Sciences – Académie nationale de Médecine (French Academy of Sciences – National Academy of Medicine) stated in their 2005 report concerning the effects of low-level radiation that many laboratory studies have observed radiation hormesis. [ 13 ] [ 14 ] However, they cautioned that it is not yet known if radiation hormesis occurs outside the laboratory, or in humans. [ 15 ] Reports by the United States National Research Council and the National Council on Radiation Protection and Measurements and the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) argue [ 16 ] that there is no evidence for hormesis in humans, and in the case of the National Research Council hormesis is outright rejected as a possibility. [ 17 ] Therefore, the linear no-threshold model (LNT) continues to be the model generally used by regulatory agencies for human radiation exposure.
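The detection problem described here is easy to quantify. Using figures quoted in this article (a baseline lifetime cancer incidence of about 42%, as cited below, and an LNT excess of roughly 0.8% per 100 mSv), the sketch below computes the predicted relative risk at low doses, showing why excess cases vanish into baseline variation; note that scaling linearly to lower doses is an assumption of the LNT model itself, not an established measurement.

```python
# How small is the LNT-predicted signal at low doses?
# Inputs taken from the article's quoted figures: baseline lifetime cancer
# incidence ~42%, excess absolute risk ~0.8% per 100 mSv (LNT assumption).
# Scaling linearly below 100 mSv is the model's assumption, not a measurement.

BASELINE = 0.42               # lifetime baseline cancer incidence
EXCESS_PER_MSV = 0.008 / 100  # absolute excess risk per mSv under LNT

def relative_risk(dose_msv: float) -> float:
    """Predicted lifetime relative risk vs. an unexposed population."""
    return (BASELINE + EXCESS_PER_MSV * dose_msv) / BASELINE

if __name__ == "__main__":
    for dose in (1, 10, 100):
        rr = relative_risk(dose)
        print(f"{dose:4d} mSv -> RR {rr:.4f} ({(rr - 1):.2%} excess)")
```

The resulting relative risks of roughly 1.0002 to 1.02 sit squarely inside the 1.001 to 1.04 range quoted below, far below the 1.2 to 1.3 detection floor of epidemiological studies.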
Radiation hormesis proposes that radiation exposure comparable to and just above the natural background level of radiation is not harmful but beneficial, while accepting that much higher levels of radiation are hazardous. Proponents of radiation hormesis typically claim that radio-protective responses in cells and the immune system not only counter the harmful effects of radiation but additionally act to inhibit spontaneous cancer not related to radiation exposure. Radiation hormesis stands in stark contrast to the more generally accepted linear no-threshold model (LNT), which states that the radiation dose-risk relationship is linear across all doses, so that small doses are still damaging, albeit less so than higher ones. Opinion pieces on chemical and radiobiological hormesis appeared in the journals Nature [ 1 ] and Science [ 3 ] in 2003. Assessing the risk of radiation at low doses (<100 mSv) and low dose rates (<0.1 mSv per minute) is highly problematic and controversial. [ 18 ] [ 19 ] While epidemiological studies on populations of people exposed to an acute dose of high level radiation, such as Japanese atomic bomb survivors (hibakusha ( 被爆者 ) ), have robustly upheld the LNT (mean dose ~210 mSv), [ 20 ] studies involving low doses and low dose rates have failed to detect any increased cancer rate. [ 19 ] This is because the baseline cancer rate is already very high (~42 of 100 people will be diagnosed in their lifetime) and it fluctuates ~40% because of lifestyle and environmental effects, [ 7 ] [ 21 ] obscuring the subtle effects of low level radiation. Epidemiological studies may be capable of detecting relative risks as low as 1.2 to 1.3, i.e. a 20% to 30% increase. But for low doses (1–100 mSv) the predicted elevated risks are only 1.001 to 1.04, and excess cancer cases, if present, cannot be detected due to confounding factors, errors and biases. [ 21 ] [ 22 ] [ 23 ] In particular, variations in smoking prevalence, or even accuracy in reporting smoking, cause wide variation in excess cancer and measurement error bias. Thus, even a large study of many thousands of subjects with imperfect smoking-prevalence information may be less able to detect the effects of low-level radiation than a smaller study that properly compensates for smoking prevalence. [ 24 ] Given the absence of direct epidemiological evidence, there is considerable debate as to whether the dose-response relationship below 100 mSv is supralinear, linear (LNT), has a threshold, is sub-linear, or whether the coefficient is negative with a sign change, i.e. a hormetic response. The radiation adaptive response seems to be a main origin of the potential hormetic effect. Theoretical studies indicate that the adaptive response is responsible for the shape of the dose-response curve and can transform the linear relationship (LNT) into a hormetic one. [ 25 ] [ 26 ] While most major consensus reports and government bodies currently adhere to LNT, [ 27 ] the 2005 French Academy of Sciences - National Academy of Medicine's report concerning the effects of low-level radiation rejected LNT as a scientific model of carcinogenic risk at low doses, arguing that using LNT to estimate the carcinogenic effect at doses of less than 20 mSv is not justified in the light of current radiobiologic knowledge. [ 15 ] They consider there to be several dose-effect relationships rather than only one, and that these relationships have many variables such as target tissue, radiation dose, dose rate and individual sensitivity factors.
They state that further study is required of low doses (less than 100 mSv) and very low doses (less than 10 mSv), as well as of the impact of tissue type and age. The Academy considers that the LNT model is only useful for regulatory purposes, as it simplifies the administrative task. Quoting results from literature research, [ 13 ] [ 14 ] they furthermore claim that approximately 40% of laboratory studies on cell cultures and animals indicate some degree of chemical or radiobiological hormesis, and state: ...its existence in the laboratory is beyond question and its mechanism of action appears well understood. They go on to outline a growing body of research illustrating that the human body is not a passive accumulator of radiation damage but actively repairs the damage caused via a number of different processes. [ 15 ] [ 19 ] Furthermore, the increased sensitivity to radiation-induced cancer in the inherited condition ataxia-telangiectasia-like disorder illustrates the damaging effects of loss of the repair gene Mre11h, resulting in the inability to fix DNA double-strand breaks. [ 28 ] The BEIR-VII report argued that "the presence of a true dose threshold demands totally error-free DNA damage response and repair." The specific damage they worry about is double-strand breaks (DSBs), and they continue, "error-prone nonhomologous end joining (NHEJ) repair in postirradiation cellular response, argues strongly against a DNA repair-mediated low-dose threshold for cancer initiation". [ 29 ] Recent research observed that DSBs caused by CAT scans are repaired within 24 hours and that DSBs may be more efficiently repaired at low doses, suggesting that the risk of ionizing radiation at low doses may not be directly proportional to the dose. [ 30 ] [ 31 ] However, it is not known if low-dose ionizing radiation stimulates the repair of DSBs not caused by ionizing radiation, i.e. a hormetic response. Radon gas in homes is the largest source of radiation dose for most individuals, and it is generally advised that the concentration be kept below 150 Bq/m³ (4 pCi/L). [ 32 ] A recent retrospective case-control study of lung cancer risk showed a substantial reduction in cancer rates for exposures between 50 and 123 Bq per cubic meter relative to a group at zero to 25 Bq per cubic meter. [ 33 ] This study is cited as evidence for hormesis, but a single study all by itself cannot be regarded as definitive. Other studies into the effects of domestic radon exposure have not reported a hormetic effect, including, for example, the respected "Iowa Radon Lung Cancer Study" of Field et al. (2000), which also used sophisticated radon exposure dosimetry. [ 34 ] In addition, Darby et al. (2005) argue that radon exposure is negatively correlated with the tendency to smoke, and environmental studies need to accurately control for this; people living in urban areas, where smoking rates are higher, usually have lower levels of radon exposure due to the increased prevalence of multi-story dwellings. [ 35 ] When doing so, they found a significant increase in lung cancer amongst smokers exposed to radon at concentrations as low as 100 to 199 Bq/m³ and warned that smoking greatly increases the risk posed by radon exposure, i.e. reducing the prevalence of smoking would decrease deaths caused by radon. [ 35 ] [ 36 ] However, the debate over these conflicting experimental results continues; [ 37 ] in particular, well-known US and German studies have found some hormetic effects.
[ 38 ] [ 39 ] Furthermore, particle microbeam studies show that passage of even a single alpha particle (e.g. from radon and its progeny) through cell nuclei is highly mutagenic, [ 40 ] and that alpha radiation may have a higher mutagenic effect at low doses (even if a small fraction of cells are hit by alpha particles) than predicted by linear no-threshold model, a phenomenon attributed to bystander effect . [ 41 ] However, there is currently insufficient evidence at hand to suggest that the bystander effect promotes carcinogenesis in humans at low doses. [ 42 ] Radiation hormesis has not been accepted by either the United States National Research Council , [ 17 ] or the National Council on Radiation Protection and Measurements (NCRP) . [ 43 ] In May 2018, the NCRP published the report of an interdisciplinary group of radiation experts who critically reviewed 29 high-quality epidemiologic studies of populations exposed to radiation in the low dose and low dose-rate range, mostly published within the last 10 years. [ 44 ] The group of experts concluded: The recent epidemiologic studies support the continued use of the LNT model for radiation protection. This is in accord with judgments by other national and international scientific committees, based on somewhat older data, that no alternative dose-response relationship appears more pragmatic or prudent for radiation protection purposes than the LNT model. In addition, the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) wrote in its 2000 report: [ 45 ] Until the [...] uncertainties on low-dose response are resolved, the Committee believes that an increase in the risk of tumour induction proportionate to the radiation dose is consistent with developing knowledge and that it remains, accordingly, the most scientifically defensible approximation of low-dose response. However, a strictly linear dose response should not be expected in all circumstances. This is a reference to the fact that very low doses of radiation have only marginal impacts on individual health outcomes. It is therefore difficult to detect the 'signal' of decreased or increased morbidity and mortality due to low-level radiation exposure in the 'noise' of other effects. The notion of radiation hormesis has been rejected by the National Research Council's (part of the National Academy of Sciences) 16-year-long study on the Biological Effects of Ionizing Radiation. "The scientific research base shows that there is no threshold of exposure below which low levels of ionizing radiation can be demonstrated to be harmless or beneficial. The health risks – particularly the development of solid cancers in organs – rise proportionally with exposure" says Richard R. Monson, associate dean for professional education and professor of epidemiology, Harvard School of Public Health, Boston. [ 46 ] [ 17 ] The possibility that low doses of radiation may have beneficial effects (a phenomenon often referred to as "hormesis") has been the subject of considerable debate. Evidence for hormetic effects was reviewed, with emphasis on material published since the 1990 BEIR V study on the health effects of exposure to low levels of ionizing radiation. Although examples of apparent stimulatory or protective effects can be found in cellular and animal biology, the preponderance of available experimental information does not support the contention that low levels of ionizing radiation have a beneficial effect. The mechanism of any such possible effect remains obscure. 
At this time, the assumption that any stimulatory hormetic effects from low doses of ionizing radiation will have a significant health benefit to humans that exceeds potential detrimental effects from radiation exposure at the same dose is unwarranted. Kerala's monazite sand (containing a third of the world's economically recoverable reserves of radioactive thorium) emits about 8 microsieverts per hour of gamma radiation, 80 times the equivalent dose rate in London, but a decade-long study of 69,985 residents published in Health Physics in 2009 "showed no excess cancer risk from exposure to terrestrial gamma radiation. The excess relative risk of cancer excluding leukemia was estimated to be −0.13 per Gy (95% CI: −0.58, 0.46)", indicating no statistically significant positive or negative relationship between background radiation levels and cancer risk in this sample. [ 47 ] Studies in cell cultures can be useful for finding mechanisms for biological processes, but they can also be criticized for not effectively capturing the whole of the living organism. A study by E. I. Azzam suggested that pre-exposure to radiation causes cells to turn on protection mechanisms. [ 48 ] A different study by de Toledo and collaborators showed that irradiation with gamma rays increases the concentration of glutathione, an antioxidant found in cells. [ 49 ] In 2011, an in vitro study led by S. V. Costes showed in time-lapse images a strongly non-linear response of certain cellular repair mechanisms called radiation-induced foci (RIF). The study found that low doses of radiation prompted higher rates of RIF formation than high doses, and that after low-dose exposure RIF continued to form after the radiation had ended. Measured rates of RIF formation were 15 RIF/Gy at 2 Gy and 64 RIF/Gy at 0.1 Gy. [ 31 ] These results suggest that low dose levels of ionizing radiation may not increase cancer risk in direct proportion to dose, and thus contradict the linear no-threshold standard model. [ 50 ] Mina Bissell, a world-renowned breast-cancer researcher and collaborator in this study, stated: "Our data show that at lower doses of ionizing radiation, DNA repair mechanisms work much better than at higher doses. This non-linear DNA damage response casts doubt on the general assumption that any amount of ionizing radiation is harmful and additive." [ 50 ] An early study in which mice were exposed to a low dose of radiation daily (0.11 R per day) suggests that they may outlive control animals. [ 51 ] A study by Otsuka and collaborators found hormesis in animals. [ 52 ] Miyachi conducted a study on mice and found that a 200 mGy X-ray dose protects mice against both further X-ray exposure and ozone gas. [ 53 ] In another rodent study, Sakai and collaborators found that low-dose-rate (1 mGy/h) gamma irradiation prevents the development of cancer (induced by chemical means, injection of methylcholanthrene). [ 54 ] In a 2006 paper, [ 55 ] a dose of 1 Gy was delivered to cells (at a constant rate from a radioactive source) over a series of lengths of time between 8.77 and 87.7 hours. The abstract states that for a dose delivered over 35 hours or more (low dose rate), no transformation of the cells occurred, and that for the 1 Gy dose delivered over 8.77 to 18.3 hours the biological effect (neoplastic transformation) was about "1.5 times less than that measured at high dose rate in previous studies with a similar quality of [X-ray] radiation".
Likewise, it has been reported that fractionation of gamma irradiation reduces the likelihood of a neoplastic transformation. [ 56 ] Pre-exposure to fast neutrons and gamma rays from Cs-137 is reported to increase the ability of a second dose to induce a neoplastic transformation. [ 57 ] Caution must be used in interpreting these results since, as is noted in the BEIR VII report, these pre-doses can also increase cancer risk: [ 17 ] In chronic low-dose experiments with dogs (75 mGy/d for the duration of life), vital hematopoietic progenitors showed increased radioresistance along with renewed proliferative capacity (Seed and Kaspar 1992). Under the same conditions, a subset of animals showed an increased repair capacity as judged by the unscheduled DNA synthesis assay (Seed and Meyers 1993). Although one might interpret these observations as an adaptive effect at the cellular level, the exposed animal population experienced a high incidence of myeloid leukemia and related myeloproliferative disorders. The authors concluded that "the acquisition of radioresistance and associated repair functions under the strong selective and mutagenic pressure of chronic radiation is tied temporally and causally to leukemogenic transformation by the radiation exposure" (Seed and Kaspar 1992). However, 75 mGy/d cannot be accurately described as a low dose rate – it is equivalent to over 27 sieverts per year. The same study on dogs showed no increase in cancer nor reduction in life expectancy for dogs irradiated at 3 mGy/d. [ 58 ] In a long-term study of Chernobyl disaster liquidators, [ 59 ] it was found that: "During current research paradoxically longer telomeres were found among persons, who have received heavier long-term irradiation." and "Mortality due to oncologic diseases was lower than in general population in all age groups that may reflect efficient health care of this group." In its conclusions, however, the study set these interim results aside and followed the LNT hypothesis: "The signs of premature aging were found in Chernobyl disaster clean-up workers; moreover, aging process developed in heavier form and at younger age in humans, who underwent greater exposure to ionizing radiation." A study of survivors of the Hiroshima atomic bomb explosion yielded similar results. [ 60 ] In an Australian study which analyzed the association between solar UV exposure and DNA damage, the results indicated that although the frequency of cells with chromosome breakage increased with increasing sun exposure, the misrepair of DNA strand breaks decreased as sun exposure was heightened. [ 61 ] The health of the inhabitants of radioactive apartment buildings in Taiwan has received prominent attention. In 1982, more than 20,000 tons of steel was accidentally contaminated with cobalt-60, and much of this radioactive steel was used to build apartments and exposed thousands of Taiwanese to gamma radiation levels of up to >1000 times background (average 47.7 mSv, maximum 2360 mSv excess cumulative dose). The radioactive contamination was discovered in 1992. A seriously flawed 2004 study compared the buildings' younger residents with the much older general population of Taiwan and determined that the younger residents were less likely to have been diagnosed with cancer than older people; this was touted as evidence of a radiation hormesis effect. [ 62 ] [ 63 ] (Older people have much higher cancer rates even in the absence of excess radiation exposure.)
In the years shortly after exposure, the total number of cancer cases has been reported to be either lower than the society-wide average or slightly elevated. [ 64 ] [ 65 ] Leukaemia and thyroid cancer were substantially elevated. [ 62 ] [ 64 ] When a lower rate of "all cancers" was found, it was thought to be due to the exposed residents having a higher socioeconomic status, and thus an overall healthier lifestyle. [ 62 ] [ 64 ] Additionally, Hwang et al. cautioned in 2006 that leukaemia was the first cancer type found to be elevated amongst the survivors of the Hiroshima and Nagasaki bombings, so it could be decades before any increase in more common cancer types is seen. [ 62 ] Besides the excess risks of leukaemia and thyroid cancer, a later publication notes various DNA anomalies and other health effects among the exposed population: [ 66 ] There have been several reports concerning the radiation effects on the exposed population, including cytogenetic analysis that showed increased micronucleus frequencies in peripheral lymphocytes in the exposed population, increases in acentromeric and single or multiple centromeric cytogenetic damages, and higher frequencies of chromosomal translocations, rings and dicentrics. Other analyses have shown persistent depression of peripheral leucocytes and neutrophils, increased eosinophils, altered distributions of lymphocyte subpopulations, increased frequencies of lens opacities, delays in physical development among exposed children, increased risk of thyroid abnormalities, and late consequences in hematopoietic adaptation in children. People living in these buildings also experienced infertility. [ 67 ] Intentional exposure to water and air containing increased amounts of radon is perceived as therapeutic, and "radon spas" can be found in the United States, Czechia, Poland, Germany, Austria and other countries. Given the uncertain effects of low-level and very-low-level radiation, there is a pressing need for quality research in this area. An expert panel convened at the 2006 Ultra-Low-Level Radiation Effects Summit at Carlsbad, New Mexico, proposed the construction of an Ultra-Low-Level Radiation laboratory. [ 68 ] The laboratory, if built, would investigate the effects of almost no radiation on laboratory animals and cell cultures, and would compare these groups to control groups exposed to natural radiation levels. Precautions would be taken, for example, to remove potassium-40 from the food of laboratory animals. The expert panel believes that the Ultra-Low-Level Radiation laboratory is the only experiment that can explore with authority and confidence the effects of low-level radiation, and that it can confirm or discard the various radiobiological effects proposed at low radiation levels, e.g. LNT, threshold and radiation hormesis. [ 69 ] The first preliminary results of the effects of almost no radiation on cell cultures were reported by two research groups in 2011 and 2012; researchers in the US studied cell cultures protected from radiation in a steel chamber 650 meters underground at the Waste Isolation Pilot Plant in Carlsbad, New Mexico, [ 70 ] and researchers in Europe proposed an experiment design to study the effects of almost no radiation on mouse cells (pKZ1 transgenic chromosomal inversion assay), but did not carry out the experiment. [ 71 ]
https://en.wikipedia.org/wiki/Radiation_hormesis
Radiation materials science is a subfield of materials science which studies the interaction of radiation with matter: a broad subject covering many forms of irradiation and of matter. Some of the most profound effects of irradiation on materials occur in the core of nuclear power reactors, where atoms comprising the structural components are displaced numerous times over the course of their engineering lifetimes. The consequences of radiation to core components include changes in shape and volume by tens of percent, increases in hardness by factors of five or more, severe reduction in ductility and increased embrittlement, and susceptibility to environmentally induced cracking. For these structures to fulfill their purpose, a firm understanding of the effect of radiation on materials is required in order to account for irradiation effects in design, to mitigate its effect by changing operating conditions, or to serve as a guide for creating new, more radiation-tolerant materials that can better serve their purpose. The types of radiation that can alter structural materials are neutron radiation, ion beams, electrons (beta particles), and gamma rays. All of these forms of radiation have the capability to displace atoms from their lattice sites, which is the fundamental process that drives the changes in structural metals. The inclusion of ions among the irradiating particles provides a tie-in to other fields and disciplines such as the use of accelerators for the transmutation of nuclear waste, or the creation of new materials by ion implantation, ion beam mixing, plasma-assisted ion implantation, and ion beam-assisted deposition. The effect of irradiation on materials is rooted in the initial event in which an energetic projectile strikes a target. While the event is made up of several steps or processes, the primary result is the displacement of an atom from its lattice site. Irradiation displaces an atom from its site, leaving a vacant site behind (a vacancy), and the displaced atom eventually comes to rest in a location between lattice sites, becoming an interstitial atom. The vacancy-interstitial pair is central to radiation effects in crystalline solids and is known as a Frenkel pair. The presence of Frenkel pairs and other consequences of irradiation damage determine the physical effects of irradiation and, with the application of stress, its mechanical effects: the accumulation and motion of these defects give rise to phenomena such as swelling, growth, phase transition, and segregation. In addition to the atomic displacement, an energetic charged particle moving in a lattice also gives energy to electrons in the system, via the electronic stopping power. For high-energy particles, this energy transfer can also produce damage in non-metallic materials, such as ion tracks and fission tracks in minerals. [ 1 ] [ 2 ] The radiation damage event is defined as the transfer of energy from an incident projectile to the solid and the resulting distribution of target atoms after completion of the event. This event is composed of several distinct processes. The result of a radiation damage event, if the energy given to a lattice atom is above the threshold displacement energy, is the creation of a collection of point defects (vacancies and interstitials) and clusters of these defects in the crystal lattice.
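The number of displaced atoms produced by one primary knock-on atom of energy T, quantified more formally in the next paragraph as υ(T), is often estimated with simple analytical damage models. The sketch below implements the Kinchin–Pease form and its NRT modification with a displacement efficiency of 0.8, a standard textbook model rather than anything specific to this article; the threshold displacement energy of 40 eV is a typical assumed value for iron-based alloys, used here only as an illustrative input.

```python
# Kinchin-Pease / NRT estimate of displacements per primary knock-on atom (PKA).
# Standard textbook model; E_d = 40 eV is a typical assumed threshold
# displacement energy for steels, used here only as an illustrative input.

def displacements_per_pka(t_ev: float, e_d_ev: float = 40.0, efficiency: float = 0.8) -> float:
    """NRT damage function nu(T) for a PKA of damage energy T (in eV)."""
    if t_ev < e_d_ev:
        return 0.0                      # too little energy: no stable displacement
    if t_ev < 2.0 * e_d_ev / efficiency:
        return 1.0                      # single Frenkel pair
    return efficiency * t_ev / (2.0 * e_d_ev)  # linear cascade regime

if __name__ == "__main__":
    for t in (30.0, 60.0, 1e3, 1e5):    # PKA energies in eV
        print(f"T = {t:>8.0f} eV -> nu(T) = {displacements_per_pka(t):.1f}")
```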
The essence of the quantification of radiation damage in solids is the number of displacements per unit volume per unit time, R = N ∫_{E_min}^{E_max} φ(E_i) [ ∫_{T_min}^{T_max} σ(E_i, T) υ(T) dT ] dE_i, where N is the atom number density, E_max and E_min are the maximum and minimum energies of the incoming particle, φ(E_i) is the energy-dependent particle flux, T_max and T_min are the maximum and minimum energies transferred in a collision of a particle of energy E_i and a lattice atom, σ(E_i, T) is the cross section for the collision of a particle of energy E_i that results in a transfer of energy T to the struck atom, and υ(T) is the number of displacements per primary knock-on atom. The two key variables in this equation are σ(E_i, T) and υ(T). The term σ(E_i, T) describes the transfer of energy from the incoming particle to the first atom it encounters in the target, the primary knock-on atom. The second quantity, υ(T), is the total number of displacements that the primary knock-on atom goes on to make in the solid. Taken together, they describe the total number of displacements caused by an incoming particle of energy E_i, and the outer integral accounts for the energy distribution of the incoming particles. The result is the total number of displacements in the target from a flux of particles with a known energy distribution. In radiation materials science, the displacement damage in the alloy (dpa, displacements per atom in the solid) is a better representation of the effect of irradiation on materials properties than the particle fluence (e.g. neutron fluence). See also Wigner effect. To generate materials that fit the increasing demands of nuclear reactors to operate with higher efficiency or for longer lifetimes, materials must be designed with radiation resistance in mind. In particular, Generation IV nuclear reactors operate at higher temperatures and pressures compared to modern pressurized water reactors, which account for the vast majority of Western reactors. This leads to increased vulnerability to normal mechanical failure in terms of creep resistance, as well as to radiation damaging events such as neutron-induced swelling and radiation-induced segregation of phases. By accounting for radiation damage, reactor materials would be able to withstand longer operating lifetimes. This allows reactors to be decommissioned after longer periods of time, improving the return on investment of reactors without compromising safety. This is of particular interest in developing the commercial viability of advanced and theoretical nuclear reactors, and this goal can be accomplished through engineering resistance to these displacement events. Face-centered cubic metals such as austenitic steels and Ni-based alloys can benefit greatly from grain boundary engineering. Grain boundary engineering attempts to generate higher amounts of special grain boundaries, characterized by favorable orientations between grains.
To generate materials that meet the increasing demands of nuclear reactors to operate with higher efficiency or for longer lifetimes, materials must be designed with radiation resistance in mind. In particular, Generation IV nuclear reactors operate at higher temperatures and pressures than modern pressurized water reactors, which account for the vast majority of Western reactors. This leads to increased vulnerability to ordinary mechanical failure modes such as loss of creep resistance, as well as to radiation damage events such as neutron-induced swelling and radiation-induced segregation of phases. By accounting for radiation damage, reactor materials can be designed to withstand longer operating lifetimes. This allows reactors to be decommissioned after longer periods of time, improving their return on investment without compromising safety, which is of particular interest for developing the commercial viability of advanced and theoretical nuclear reactors. This goal can be pursued by engineering resistance to these displacement events. Face-centered cubic metals such as austenitic steels and Ni-based alloys can benefit greatly from grain boundary engineering, which attempts to generate a higher proportion of special grain boundaries, characterized by favorable orientations between grains. By increasing the population of low-energy boundaries without increasing the grain size, the fracture behavior of these face-centered cubic metals can be altered to improve mechanical properties at a similar displacements-per-atom value relative to alloys without grain boundary engineering. This method of treatment in particular yields better resistance to stress corrosion cracking and oxidation. [ 3 ] By using advanced methods of material selection, materials can be judged on criteria such as their neutron-absorption cross section. Selecting materials with minimal neutron absorption can greatly reduce the number of displacements per atom that occur over a reactor material's lifetime. This slows the radiation embrittlement process proactively, by choosing materials that interact with the neutron radiation less frequently and therefore suffer fewer atomic displacements in the first place. The effect on total damage can be large, as when comparing the zirconium alloys used in modern reactor cores with stainless steel: their absorption cross sections can differ by an order of magnitude. [ 4 ] Example values for the thermal neutron cross section are shown in the table below. [ 5 ] For nickel-chromium and iron-chromium alloys, short-range order (SRO) can be designed on the nanoscale (<5 nm) that absorbs the interstitials and vacancies generated by primary knock-on atom events. This yields materials that mitigate the swelling normally occurring in the presence of high displacements per atom and keep the overall volume change below roughly ten percent. This occurs through the generation of a metastable phase that is in constant, dynamic equilibrium with the surrounding material. The metastable phase is characterized by an enthalpy of mixing that is effectively zero with respect to the main lattice, which allows the phase transformation to absorb and disperse the point defects that typically accumulate in more rigid lattices. This extends the life of the alloy by making vacancy and interstitial accumulation less successful: continual neutron bombardment in the form of displacement cascades transforms the SRO phase, while the SRO re-forms in the bulk solid solution. [ 6 ]
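As a rough illustration of the material-selection argument above, the snippet below compares the macroscopic thermal-neutron absorption cross section, Σ_a = N·σ_a, of zirconium and iron (standing in for stainless steel). The microscopic cross sections used (about 0.19 b for Zr and 2.6 b for Fe) are commonly quoted handbook values; treat the result as an order-of-magnitude comparison rather than design data.

```python
# Compare macroscopic thermal-neutron absorption cross sections,
# Sigma_a = N * sigma_a, for two candidate core materials.
# Handbook-style values; illustrative only.

BARN = 1e-28     # m^2
N_A = 6.022e23   # Avogadro's number, 1/mol

materials = {
    #  name        density g/cm^3  molar mass g/mol  sigma_a (barns)
    "zirconium": (6.52,            91.22,            0.19),
    "iron":      (7.87,            55.85,            2.6),
}

for name, (rho, M, sigma_barns) in materials.items():
    N = rho * 1e6 / M * N_A          # atoms per m^3
    Sigma = N * sigma_barns * BARN   # macroscopic cross section, 1/m
    mfp = 1.0 / Sigma                # absorption mean free path, m
    print(f"{name:10s}  Sigma_a = {Sigma:5.2f} /m   "
          f"mean free path = {mfp * 100:6.0f} cm")
```

With these values the absorption mean free path in zirconium is over a metre, versus a few centimetres in iron, reflecting the order-of-magnitude difference in cross section cited above.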
https://en.wikipedia.org/wiki/Radiation_material_science
Radiation monitoring involves the measurement of radiation dose or radionuclide contamination for reasons related to the assessment or control of exposure to radiation or radioactive substances, and the interpretation of the results. [ 1 ] Environmental monitoring is the measurement of external dose rates due to sources in the environment or of radionuclide concentrations in environmental media. Source monitoring is a specific term used in ionising radiation monitoring and, according to the IAEA, is the measurement of activity in radioactive material being released to the environment or of external dose rates due to sources within a facility or activity. In this context a source is anything that may cause radiation exposure, such as by emitting ionising radiation or releasing radioactive substances. The phrase "standard source" is also used as a de facto term in the more specific context of a calibration standard source in ionising radiation metrology. The methodological and technical details of the design and operation of source and environmental radiation monitoring programmes and systems for different radionuclides, environmental media and types of facility are given in IAEA Safety Standards Series No. RS–G-1.8 [ 2 ] and in IAEA Safety Reports Series No. 64. [ 3 ] Practical radiation measurement using calibrated radiation protection instruments is essential in evaluating the effectiveness of protection measures and in assessing the radiation dose likely to be received by individuals. The measuring instruments for radiation protection are both "installed" (in a fixed position) and portable (hand-held or transportable). Installed instruments are fixed in positions which are known to be important in assessing the general radiation hazard in an area. Examples are installed "area" radiation monitors, gamma interlock monitors, personnel exit monitors, and airborne particulate monitors. The area radiation monitor will measure the ambient radiation, usually X-ray, gamma or neutrons; these are radiations which can have significant levels over a range in excess of tens of metres from their source, and thereby cover a wide area. Gamma radiation "interlock monitors" are used to prevent inadvertent exposure of workers to an excess dose by preventing personnel access to an area when a high radiation level is present; they interlock the process access directly. Airborne contamination monitors measure the concentration of radioactive particles in the ambient air to guard against radioactive particles being ingested or deposited in the lungs of personnel. These instruments will normally give a local alarm, but are often connected to an integrated safety system so that areas of the plant can be evacuated and personnel are prevented from entering an area of high airborne contamination. "Personnel exit monitors" (PEM) are used to monitor workers who are exiting a "contamination controlled" or potentially contaminated area. These can be in the form of hand monitors, clothing frisk probes, or whole-body monitors. They monitor the surface of the worker's body and clothing to check whether any radioactive contamination has been deposited, and generally measure alpha, beta or gamma, or combinations of these. The UK National Physical Laboratory publishes a good practice guide through its Ionising Radiation Metrology Forum concerning the provision of such equipment and the methodology of calculating the alarm levels to be used.
[ 4 ] Portable instruments are hand-held or transportable. The hand-held instrument is generally used as a survey meter to check an object or person in detail, or to assess an area where no installed instrumentation exists. It can also be used for personnel exit monitoring or personnel contamination checks in the field. These instruments generally measure alpha, beta or gamma, or combinations of these. Transportable instruments are generally instruments that would otherwise be permanently installed, but are temporarily placed in an area to provide continuous monitoring where a hazard is likely. Such instruments are often installed on trolleys to allow easy deployment, and are associated with temporary operational situations. In the United Kingdom the HSE has issued a user guidance note on selecting the correct radiation measurement instrument for the application concerned. [ 5 ] This covers all radiation instrument technologies and is a useful comparative guide. A number of commonly used detection instruments are listed below. The links should be followed for a fuller description of each.
https://en.wikipedia.org/wiki/Radiation_monitoring
Radiation portal monitors (RPMs) are passive radiation detection devices used for screening individuals, vehicles, cargo or other vectors to detect illicit radioactive sources, for example at borders or secure facilities. Since 9/11, fear of terrorist attacks with radiological weapons has spurred RPM deployment for cargo scanning, particularly in the United States. RPMs were originally developed for screening individuals and vehicles at secure facilities such as weapons laboratories. [ 1 ] They were deployed at scrap metal facilities to detect radiation sources mixed among scrap that could contaminate a facility and result in a costly cleanup. [ citation needed ] As part of the effort to thwart nuclear smuggling after the breakup of the Soviet Union, RPMs were deployed around that territory, and later around many other European and Asian countries, by the US Department of Energy (DOE) National Nuclear Security Administration (NNSA) Second Line of Defense Program (SLD) [ 2 ] starting in the late 1990s. After the attacks of 9/11, US Customs and Border Protection (CBP) started the Radiation Portal Monitor Program (RPMP) to deploy RPMs at all US borders (land, sea and air). [ 3 ] A radiation portal monitor is designed to detect traces of radiation emitted from an object passing through it. Gamma radiation is detected, in some cases complemented by neutron detection when sensitivity to nuclear material is desired. [ 4 ] First-generation RPMs often rely on PVT scintillators for gamma counting. These provide limited information on the energy of detected photons, and as a result they have been criticized for their inability to distinguish gamma rays originating from nuclear sources from gamma rays originating from the large variety of benign cargo types that naturally emit radioactivity, including cat litter, granite, porcelain, stoneware, bananas, etc. [ 5 ] These naturally occurring radioactive materials, called NORM, account for 99% of nuisance alarms. [ 6 ] It is worth noting that bananas have erroneously been reported as the source of radiation alarms; they are not. Most produce contains potassium-40, but the packing density of fruits and vegetables is too low to produce a significant signal. PVT does have the ability to provide some energy discrimination, which can be exploited to limit nuisance alarms from NORM. [ 7 ] In an attempt to reduce the high nuisance alarm rates of first-generation RPMs, the Advanced Spectroscopic Portal (ASP) program was launched. Some of the portal monitors evaluated for this purpose are based on NaI(Tl) scintillating crystals. These devices, having better energy resolution than PVT, were intended to reduce nuisance alarm rates by distinguishing threats from benign sources on the basis of the detected gamma-ray spectra. ASPs based on NaI(Tl) cost several times as much as first-generation RPMs. To date, NaI(Tl)-based ASPs have not been able to demonstrate significantly better performance than PVT-based RPMs. [ 8 ] The ASP program was canceled in 2011 [ 9 ] after continued problems, including a high rate of false positives and difficulty maintaining stable operation. [ 10 ] Within the scope of the ASP program, high-purity germanium (HPGe) based portal monitors were also evaluated. HPGe, having significantly better energy resolution than NaI(Tl), allows rather precise identification of the isotopes contributing to a gamma-ray spectrum.
However, due to very high costs and major constraints such as cryo-cooling requirements, US government support for HPGe-based portal monitors was dropped. RPMs geared towards the interception of nuclear threats usually incorporate a neutron detection technology. The vast majority of neutron detectors deployed in RPMs to date rely on He-3 tubes surrounded by neutron moderators. Since the end of 2009, however, the global He-3 supply crisis [ 11 ] has made this technology unavailable. The search for alternative neutron detection technologies has yielded satisfactory results. [ 12 ] The latest technology being deployed at ports [ 13 ] uses pressurized natural helium to directly detect fast neutrons, without the need for bulky neutron moderators. Following neutron scattering events, the recoil nuclei cause the natural helium to glow (scintillate), allowing photomultipliers (e.g. SiPMs) to produce an electrical signal. [ 14 ] Introducing moderators and lithium-6 to capture thermalized neutrons further increases the detection capabilities of natural helium, at the expense of losing the initial information about the neutrons (such as their energy) and reducing sensitivity to shielded neutron-emitting materials. RPMs are deployed with the aim of intercepting radiological threats and of deterring malicious groups from deploying them. Radiological dispersal devices (RDDs) are weapons of mass disruption rather than weapons of mass destruction. "Dirty bombs" are examples of RDDs. As the name suggests, an RDD aims at dispersing radioactive material over an area, causing high cleanup costs and psychological and economic damage. Nevertheless, direct human losses caused by RDDs are low and not attributable to the radiological aspect. RDDs are easily fabricated, and their components are readily obtainable. RDDs are comparatively easy to detect with RPMs due to their high level of radioactivity. RDDs emit gamma radiation and sometimes, depending on the isotopes used, neutrons. Improvised nuclear devices (INDs) and nuclear weapons are weapons of mass destruction. They are difficult to acquire, manufacture, refurbish, and handle. While INDs can be constructed to emit only low amounts of radiation, making them difficult to detect with RPMs, all INDs emit some gamma and neutron radiation. Both gamma radiation and neutron radiation can cause an RPM to trigger an alarm procedure. Alarms caused by statistical fluctuations of detection rates are referred to as false alarms; alarms caused by benign radioactive sources are referred to as nuisance alarms. Causes of nuisance alarms can be broken up into several large categories. This article relates primarily to RPMs deployed for screening trucks at ports of entry. Over 1400 RPMs are deployed at US borders, and a similar number at foreign locations, for the purpose of interdicting illicit radiological and nuclear material. The US deployments cover all land border vehicles, all seaport containerized cargo, and all mail and express courier facilities. Efforts are also being made to deploy similar measures to other cross-border vectors. RPMs are also deployed at civilian and military nuclear facilities to prevent theft of radiological materials. Steel mills often use RPMs to screen incoming scrap metal to detect radioactive sources illegally disposed of in this way. Garbage incineration plants often monitor incoming material to avoid contamination.
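The trade-off between sensitivity and false alarms is governed by simple counting statistics. As an illustration, the sketch below implements a common gross-count alarming scheme: declare an alarm when the counts recorded during a vehicle occupancy exceed the background mean by k standard deviations. The background rate, occupancy time, and threshold are assumed values chosen only to show the arithmetic, not parameters of any deployed system.

```python
import math

# Illustrative numbers: 1000 counts/s background, 2-second occupancy.
bg_rate = 1000.0   # background count rate (counts/s), assumed
t = 2.0            # vehicle occupancy time (s), assumed
k = 5.0            # alarm threshold in standard deviations

mu = bg_rate * t                     # expected background counts
threshold = mu + k * math.sqrt(mu)   # gross-count alarm level

# For large mu the Poisson distribution is well approximated by a
# Gaussian, so the false-alarm probability per occupancy is the
# one-sided tail beyond k sigma:
p_false = 0.5 * math.erfc(k / math.sqrt(2.0))

print(f"alarm if counts > {threshold:.0f}")
print(f"false-alarm probability per vehicle ~ {p_false:.1e}")
print(f"i.e. about 1 false alarm per {1.0 / p_false:,.0f} vehicles")
```

A 5-sigma threshold gives a statistical false-alarm rate of roughly one in several million occupancies; in practice nuisance alarms from NORM-bearing cargo dominate long before that limit is reached.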
https://en.wikipedia.org/wiki/Radiation_portal_monitor
Radiation pressure (also known as light pressure ) is mechanical pressure exerted upon a surface due to the exchange of momentum between the object and the electromagnetic field . This includes the momentum of light or electromagnetic radiation of any wavelength that is absorbed , reflected , or otherwise emitted (e.g. black-body radiation ) by matter on any scale (from macroscopic objects to dust particles to gas molecules). [ 1 ] [ 2 ] [ 3 ] The associated force is called the radiation pressure force , or sometimes just the force of light . The forces generated by radiation pressure are generally too small to be noticed under everyday circumstances; however, they are important in some physical processes and technologies. This particularly includes objects in outer space , where it is usually the main force acting on objects besides gravity, and where the net effect of a tiny force may have a large cumulative effect over long periods of time. For example, had the effects of the Sun's radiation pressure on the spacecraft of the Viking program been ignored, the spacecraft would have missed Mars orbit by about 15,000 km (9,300 mi). [ 4 ] Radiation pressure from starlight is crucial in a number of astrophysical processes as well. The significance of radiation pressure increases rapidly at extremely high temperatures and can sometimes dwarf the usual gas pressure , for instance, in stellar interiors and thermonuclear weapons . Furthermore, large lasers operating in space have been suggested as a means of propelling sail craft in beam-powered propulsion . Radiation pressure forces are the bedrock of laser technology and the branches of science that rely heavily on lasers and other optical technologies . That includes, but is not limited to, biomicroscopy (where light is used to irradiate and observe microbes, cells, and molecules), quantum optics , and optomechanics (where light is used to probe and control objects like atoms, qubits and macroscopic quantum objects). Direct applications of the radiation pressure force in these fields are, for example, laser cooling (the subject of the 1997 Nobel Prize in Physics ), [ 5 ] quantum control of macroscopic objects and atoms (2012 Nobel Prize in Physics), [ 6 ] interferometry (2017 Nobel Prize in Physics) [ 7 ] and optical tweezers (2018 Nobel Prize in Physics). [ 8 ] Radiation pressure can equally well be accounted for by considering the momentum of a classical electromagnetic field or in terms of the momenta of photons , particles of light. The interaction of electromagnetic waves or photons with matter may involve an exchange of momentum . Due to the law of conservation of momentum , any change in the total momentum of the waves or photons must involve an equal and opposite change in the momentum of the matter it interacted with ( Newton's third law of motion ), as is illustrated in the accompanying figure for the case of light being perfectly reflected by a surface. This transfer of momentum is the general explanation for what we term radiation pressure. Johannes Kepler put forward the concept of radiation pressure in 1619 to explain the observation that a tail of a comet always points away from the Sun. [ 9 ] The assertion that light, as electromagnetic radiation , has the property of momentum and thus exerts a pressure upon any surface that is exposed to it was published by James Clerk Maxwell in 1862, and proven experimentally by Russian physicist Pyotr Lebedev in 1900 [ 10 ] and by Ernest Fox Nichols and Gordon Ferrie Hull in 1901. 
[ 11 ] The pressure is very small, but can be detected by allowing the radiation to fall upon a delicately poised vane of reflective metal in a Nichols radiometer (this should not be confused with the Crookes radiometer, whose characteristic motion is not caused by radiation pressure but by air flow caused by temperature differentials). Radiation pressure can be viewed as a consequence of the conservation of momentum given the momentum attributed to electromagnetic radiation. That momentum can be equally well calculated on the basis of electromagnetic theory or from the combined momenta of a stream of photons, giving identical results, as is shown below. According to Maxwell's theory of electromagnetism, an electromagnetic wave carries momentum, which will be transferred to any surface it strikes that absorbs or reflects the radiation. Consider the momentum transferred to a perfectly absorbing (black) surface. The energy flux (irradiance) of a plane wave is calculated using the Poynting vector $\mathbf{S} = \mathbf{E} \times \mathbf{H}$, which is the cross product of the electric field vector $\mathbf{E}$ and the magnetic field's auxiliary field vector (or magnetizing field) $\mathbf{H}$. The magnitude, denoted by $S$, divided by the speed of light is the density of the linear momentum per unit area (pressure) of the electromagnetic field. Dimensionally, the Poynting vector is

$$S = \frac{\text{power}}{\text{area}} = \frac{\text{rate of doing work}}{\text{area}} = \frac{\Delta F}{\text{area}}\,\frac{\Delta x}{\Delta t},$$

which is pressure, $\Delta F/\text{area}$, times the speed of light, $c = \Delta x/\Delta t$. That pressure is experienced as radiation pressure on the surface:

$$P_{\text{incident}} = \frac{\langle S \rangle}{c} = \frac{I_f}{c}$$

where $P$ is pressure (usually in pascals), $I_f$ is the incident irradiance (usually in W/m²) and $c$ is the speed of light in vacuum. Here, $1/c \approx 3.34$ N/GW. If the surface is planar at an angle α to the incident wave, the intensity across the surface will be geometrically reduced by the cosine of that angle, and the component of the radiation force against the surface will also be reduced by the cosine of α, resulting in a pressure:

$$P_{\text{incident}} = \frac{I_f}{c} \cos^2 \alpha$$

The momentum from the incident wave is in the same direction as that wave. But only the component of that momentum normal to the surface contributes to the pressure on the surface, as given above. The component of the force tangent to the surface is not called pressure. [ 12 ] The above treatment for an incident wave accounts for the radiation pressure experienced by a black (totally absorbing) body. If the wave is specularly reflected, then the recoil due to the reflected wave will further contribute to the radiation pressure. In the case of a perfect reflector, this pressure will be identical to the pressure caused by the incident wave:

$$P_{\text{emitted}} = \frac{I_f}{c}$$

thus doubling the net radiation pressure on the surface:

$$P_{\text{net}} = P_{\text{incident}} + P_{\text{emitted}} = \frac{2 I_f}{c}$$

For a partially reflective surface, the second term must be multiplied by the reflectivity (also known as the reflection coefficient of intensity), so that the increase is less than double.
For a diffusely reflective surface, the details of the reflection and geometry must be taken into account, again resulting in an increased net radiation pressure of less than double. Just as a wave reflected from a body contributes to the net radiation pressure experienced, a body that emits radiation of its own (rather than reflected) obtains a radiation pressure again given by the irradiance of that emission in the direction normal to the surface, $I_e$:

$$P_{\text{emitted}} = \frac{I_e}{c}$$

The emission can be from black-body radiation or any other radiative mechanism. Since all materials emit black-body radiation (unless they are totally reflective or at absolute zero), this source of radiation pressure is ubiquitous but usually tiny. However, because black-body radiation increases rapidly with temperature (as the fourth power of temperature, given by the Stefan–Boltzmann law), radiation pressure due to the temperature of a very hot object (or due to incoming black-body radiation from similarly hot surroundings) can become significant. This is important in stellar interiors. Electromagnetic radiation can be viewed in terms of particles rather than waves; these particles are known as photons. Photons have no rest mass; however, photons are never at rest (they move at the speed of light) and nonetheless carry a momentum, which is given by:

$$p = \frac{h}{\lambda} = \frac{E_p}{c},$$

where $p$ is momentum, $h$ is the Planck constant, $\lambda$ is the wavelength, and $c$ is the speed of light in vacuum, and $E_p$ is the energy of a single photon, given by:

$$E_p = h\nu = \frac{hc}{\lambda}$$

The radiation pressure again can be seen as the transfer of each photon's momentum to the opaque surface, plus the momentum due to a (possible) recoil photon for a (partially) reflecting surface. Since an incident wave of irradiance $I_f$ over an area $A$ has a power of $I_f A$, this implies a flux of $I_f / E_p$ photons per second per unit area striking the surface. Combining this with the above expression for the momentum of a single photon results in the same relationships between irradiance and radiation pressure described above using classical electromagnetics. And again, reflected or otherwise emitted photons will contribute to the net radiation pressure identically. In general, the pressure of electromagnetic waves can be obtained from the vanishing of the trace of the electromagnetic stress tensor: since this trace equals $3P - u$, we get

$$P = \frac{u}{3},$$

where $u$ is the radiation energy per unit volume. This can also be shown in the specific case of the pressure exerted on surfaces of a body in thermal equilibrium with its surroundings, at a temperature $T$: the body will be surrounded by a uniform radiation field described by the Planck black-body radiation law and will experience a compressive pressure due to that impinging radiation, its reflection, and its own black-body emission. From that it can be shown that the resulting pressure is equal to one third of the total radiant energy per unit volume in the surrounding space. [ 13 ] [ 14 ] [ 15 ] [ 16 ] Using the Stefan–Boltzmann law, this can be expressed as

$$P_{\text{compress}} = \frac{u}{3} = \frac{4\sigma}{3c} T^4,$$

where $\sigma$ is the Stefan–Boltzmann constant.
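As a quick sanity check on the claim that black-body radiation pressure matters in stellar interiors, the sketch below evaluates $P_{\text{compress}} = 4\sigma T^4/(3c)$ at the roughly 15 MK solar-core temperature quoted later in this article and compares it with an ideal-gas estimate. The core density and mean particle mass used for the gas term are round illustrative values, so only the orders of magnitude are meaningful; consistent with the text, radiation pressure comes out small relative to gas pressure in the Sun.

```python
# Black-body radiation pressure P = (4*sigma/(3*c)) * T^4 versus an
# ideal-gas estimate at solar-core-like conditions (illustrative values).
sigma = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
c = 2.998e8        # speed of light, m/s
k_B = 1.381e-23    # Boltzmann constant, J/K

T = 15e6           # core temperature, K (stellar-model value)
rho = 1.5e5        # core density, kg/m^3 (round illustrative value)
m_mean = 1.0e-27   # mean particle mass, kg (~0.6 proton masses for an
                   # ionized H/He mixture; illustrative)

P_rad = 4.0 * sigma / (3.0 * c) * T**4   # ~1e13 Pa
P_gas = rho / m_mean * k_B * T           # ~3e16 Pa

print(f"radiation pressure: {P_rad:.2e} Pa")
print(f"gas pressure:       {P_gas:.2e} Pa")
print(f"ratio P_rad/P_gas:  {P_rad / P_gas:.1e}")
```

The T⁴ scaling is the important point: raising the temperature by a factor of ten, as in the cores of the heaviest stars, raises the radiation term by ten thousand while the gas term grows only linearly, which is why radiation pressure dominates there.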
Solar radiation pressure is due to the Sun's radiation and is most significant at closer distances to the Sun, thus especially within the Solar System. While it acts on all objects, its net effect is generally greater on smaller bodies, since they have a larger ratio of surface area to mass. All spacecraft experience such a pressure, except when they are in the shadow of a larger orbiting body. Solar radiation pressure on objects near the Earth may be calculated using the Sun's irradiance at 1 AU, known as the solar constant, or G_SC, whose value is set at 1361 W/m² as of 2011. [ 17 ] All stars have a spectral energy distribution that depends on their surface temperature; the distribution is approximately that of black-body radiation. This distribution must be taken into account when calculating the radiation pressure or identifying reflector materials for optimizing a solar sail, for instance. Solar pressure can escalate momentarily, or for hours, during solar flares and coronal mass ejections, but the effects remain essentially immeasurable in relation to Earth's orbit. These pressures do, however, persist over eons, and cumulatively they have produced a measurable movement in the Earth-Moon system's orbit. Solar radiation pressure at the Earth's distance from the Sun may be calculated by dividing the solar constant G_SC (above) by the speed of light c. For an absorbing sheet facing the Sun, this is simply: [ 18 ]

$$P = \frac{G_{\text{SC}}}{c} \approx 4.5 \cdot 10^{-6}~\text{Pa} = 4.5~\mu\text{Pa}.$$

This result is in pascals, equivalent to N/m² (newtons per square meter). For a sheet at an angle α to the Sun, the effective area A of the sheet is reduced by a geometrical factor, resulting in a force in the direction of the sunlight of:

$$F = \frac{G_{\text{SC}}}{c}(A \cos \alpha).$$

To find the component of this force normal to the surface, another cosine factor must be applied, resulting in a pressure P on the surface of:

$$P = \frac{F \cos \alpha}{A} = \frac{G_{\text{SC}}}{c} \cos^2 \alpha.$$

Note, however, that in order to account for the net effect of solar radiation on a spacecraft, for instance, one would need to consider the total force (in the direction away from the Sun) given by the preceding equation, rather than just the component normal to the surface that we identify as "pressure". The solar constant is defined for the Sun's radiation at the distance to the Earth, also known as one astronomical unit (au). Consequently, at a distance of R astronomical units (R thus being dimensionless), applying the inverse-square law, we find:

$$P = \frac{G_{\text{SC}}}{c R^2} \cos^2 \alpha.$$

Finally, considering not an absorbing but a perfectly reflecting surface, the pressure is doubled due to the reflected wave, resulting in:

$$P = \frac{2\, G_{\text{SC}}}{c R^2} \cos^2 \alpha.$$

Note that unlike the case of an absorbing material, the resulting force on a reflecting body is given exactly by this pressure acting normal to the surface, with the tangential forces from the incident and reflected waves canceling each other. In practice, materials are neither totally reflecting nor totally absorbing, so the resulting force will be a weighted average of the forces calculated using these formulas. Solar radiation pressure is a source of orbital perturbations.
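The formulas above translate directly into a small calculator. The sketch below evaluates the radiation pressure on a flat surface as a function of distance, tilt angle, and reflectivity, treating a partially reflecting surface as a weighted blend of the absorbing and perfectly reflecting cases, per the weighted-average remark above. The function name and interface are illustrative, not a standard API.

```python
import math

G_SC = 1361.0       # solar constant at 1 au, W/m^2
C = 299_792_458.0   # speed of light, m/s

def solar_radiation_pressure(R_au=1.0, alpha_deg=0.0, reflectivity=0.0):
    """Radiation pressure (Pa) normal to a flat surface.

    R_au:         distance from the Sun, astronomical units
    alpha_deg:    angle between the surface normal and the sunlight
    reflectivity: 0 = perfectly absorbing, 1 = perfect specular mirror
    """
    cos_a = math.cos(math.radians(alpha_deg))
    absorbing = G_SC / (C * R_au**2) * cos_a**2
    return (1.0 + reflectivity) * absorbing

# Absorbing sheet facing the Sun at 1 au: ~4.5 micropascals
print(solar_radiation_pressure())                  # ~4.54e-06 Pa
# Perfect mirror at 1 au, face-on: the pressure doubles
print(solar_radiation_pressure(reflectivity=1.0))  # ~9.08e-06 Pa
# Same mirror near Mars (~1.52 au), tilted 30 degrees
print(solar_radiation_pressure(1.52, 30.0, 1.0))   # ~2.95e-06 Pa
```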
Solar radiation pressure significantly affects the orbits and trajectories of small bodies, including all spacecraft, and affects bodies throughout much of the Solar System. Small bodies are more affected than large ones because of their lower mass relative to their surface area. Spacecraft are affected along with natural bodies (comets, asteroids, dust grains, gas molecules). The radiation pressure results in forces and torques on the bodies that can change their translational and rotational motions. Translational changes affect the orbits of the bodies. Rotational rates may increase or decrease. Loosely aggregated bodies may break apart under high rotation rates. Dust grains can either leave the Solar System or spiral into the Sun. [ 19 ] A whole body is typically composed of numerous surfaces that have different orientations on the body. The facets may be flat or curved, they will have different areas, and they may have optical properties differing from those of other facets. At any particular time, some facets are exposed to the Sun and some are in shadow. Each surface exposed to the Sun is reflecting, absorbing, and emitting radiation, while facets in shadow are emitting radiation. The summation of pressures across all of the facets defines the net force and torque on the body. These can be calculated using the equations in the preceding sections. [ 12 ] [ 18 ] The Yarkovsky effect affects the translation of a small body. It results from a face leaving solar exposure being at a higher temperature than a face approaching solar exposure. The radiation emitted from the warmer face is more intense than that of the opposite face, resulting in a net force on the body that affects its motion. [ 20 ] The YORP effect is a collection of effects expanding upon the earlier concept of the Yarkovsky effect, but of a similar nature; it affects the spin properties of bodies. [ citation needed ] The Poynting–Robertson effect applies to grain-size particles. From the perspective of a grain of dust circling the Sun, the Sun's radiation appears to be coming from a slightly forward direction (aberration of light). Therefore, the absorption of this radiation leads to a force with a component against the direction of movement. (The angle of aberration is tiny, since the radiation is moving at the speed of light while the dust grain is moving many orders of magnitude slower.) The result is a gradual spiral of dust grains into the Sun. Over long periods of time, this effect cleans out much of the dust in the Solar System. While rather small in comparison to other forces, the radiation pressure force is inexorable. Over long periods of time, the net effect of the force is substantial. Such feeble pressures can produce marked effects upon minute particles like gas ions and electrons, and are essential in the theory of electron emission from the Sun, of cometary material, and so on. Because the ratio of surface area to volume (and thus mass) increases with decreasing particle size, dusty (micrometre-size) particles are susceptible to radiation pressure even in the outer Solar System. For example, the evolution of the outer rings of Saturn is significantly influenced by radiation pressure. As a consequence of light pressure, Einstein [ 21 ] in 1909 predicted the existence of "radiation friction", which would oppose the movement of matter. He wrote: "radiation will exert pressure on both sides of the plate. The forces of pressure exerted on the two sides are equal if the plate is at rest.
However, if it is in motion, more radiation will be reflected on the surface that is ahead during the motion (front surface) than on the back surface. The backward-acting force of pressure exerted on the front surface is thus larger than the force of pressure acting on the back. Hence, as the resultant of the two forces, there remains a force that counteracts the motion of the plate and that increases with the velocity of the plate. We will call this resultant 'radiation friction' in brief." Solar sailing, an experimental method of spacecraft propulsion, uses radiation pressure from the Sun as a motive force. The idea of interplanetary travel by light was mentioned by Jules Verne in his 1865 novel From the Earth to the Moon. A sail reflects about 90% of the incident radiation. The 10% that is absorbed is radiated away from both surfaces, with the proportion emitted from the unlit surface depending on the thermal conductivity of the sail. A sail has curvature, surface irregularities, and other minor factors that affect its performance. The Japan Aerospace Exploration Agency (JAXA) has successfully unfurled a solar sail in space with its IKAROS project, which has already succeeded in propelling its payload. Radiation pressure has had a major effect on the development of the cosmos, from the birth of the universe to the ongoing formation of stars and the shaping of clouds of dust and gases on a wide range of scales. [ 22 ] The photon epoch is a phase when the energy of the universe was dominated by photons, between 10 seconds and 380,000 years after the Big Bang. [ 23 ] The process of galaxy formation and evolution began early in the history of the cosmos. Observations of the early universe strongly suggest that objects grew from the bottom up (i.e., smaller objects merging to form larger ones). As stars are thereby formed and become sources of electromagnetic radiation, radiation pressure from the stars becomes a factor in the dynamics of the remaining circumstellar material. [ 24 ] The gravitational compression of clouds of dust and gases is strongly influenced by radiation pressure, especially when the condensations lead to star births. The larger young stars forming within the compressed clouds emit intense levels of radiation that shift the clouds, causing either dispersion or condensation in nearby regions, which influences birth rates in those regions. Stars predominantly form in regions of large clouds of dust and gases, giving rise to star clusters. Radiation pressure from the member stars eventually disperses the clouds, which can have a profound effect on the evolution of the cluster. Many open clusters are inherently unstable, with a small enough mass that the escape velocity of the system is lower than the average velocity of the constituent stars. These clusters will rapidly disperse within a few million years. In many cases, the stripping away of the gas from which the cluster formed, driven by the radiation pressure of the hot young stars, reduces the cluster mass enough to allow rapid dispersal. Star formation is the process by which dense regions within molecular clouds in interstellar space collapse to form stars. As a branch of astronomy, star formation includes the study of the interstellar medium and giant molecular clouds (GMC) as precursors to the star formation process, and the study of protostars and young stellar objects as its immediate products.
Star formation theory, as well as accounting for the formation of a single star, must also account for the statistics of binary stars and the initial mass function. Planetary systems are generally believed to form as part of the same process that results in star formation. A protoplanetary disk forms by gravitational collapse of a molecular cloud, called a solar nebula, and then evolves into a planetary system by collisions and gravitational capture. Radiation pressure can clear a region in the immediate vicinity of the star. As the formation process continues, radiation pressure continues to play a role in affecting the distribution of matter. In particular, dust and grains can spiral into the star or escape the stellar system under the action of radiation pressure. In stellar interiors the temperatures are very high. Stellar models predict a temperature of 15 MK in the center of the Sun, and at the cores of supergiant stars the temperature may exceed 1 GK. As the radiation pressure scales as the fourth power of the temperature, it becomes important at these high temperatures. In the Sun, radiation pressure is still quite small when compared to the gas pressure. In the heaviest non-degenerate stars, radiation pressure is the dominant pressure component. [ 25 ] Solar radiation pressure strongly affects comet tails. Solar heating causes gases to be released from the comet nucleus, which also carry away dust grains. Radiation pressure and solar wind then drive the dust and gases away from the Sun's direction. The gases form a generally straight tail, while slower-moving dust particles create a broader, curving tail. Lasers can be used as a source of monochromatic light with wavelength $\lambda$. With a set of lenses, one can focus the laser beam to a spot that is $\lambda$ in diameter (or $r = \lambda/2$). The radiation pressure of a $P = 30$ mW laser with $\lambda = 1064$ nm can therefore be computed as follows. Area:

$$A = \pi \left(\frac{\lambda}{2}\right)^2 \approx 10^{-12}~\text{m}^2,$$

force:

$$F = \frac{P}{c} = \frac{30~\text{mW}}{299\,792\,458~\text{m/s}} \approx 10^{-10}~\text{N},$$

pressure:

$$p = \frac{F}{A} \approx \frac{10^{-10}~\text{N}}{10^{-12}~\text{m}^2} = 100~\text{Pa}.$$

This is used to trap or levitate particles in optical tweezers. The reflection of a laser pulse from the surface of an elastic solid can give rise to various types of elastic waves that propagate inside the solid or liquid. In other words, the light can excite and/or amplify motion of, and in, materials. This is the subject of study in the field of optomechanics. The weakest waves are generally those that are generated by the radiation pressure acting during the reflection of the light. Such light-pressure-induced elastic waves have, for example, been observed inside an ultrahigh-reflectivity dielectric mirror. [ 26 ] These waves are the most basic fingerprint of a light-solid matter interaction on the macroscopic scale. [ 27 ] In the field of cavity optomechanics, light is trapped and resonantly enhanced in optical cavities, for example between mirrors. This serves the purpose of greatly enhancing the power of the light, and the radiation pressure it can exert on objects and materials.
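The laser arithmetic above is easy to reproduce. The snippet below recomputes the spot area, force, and pressure for the quoted 30 mW, 1064 nm beam under the same diffraction-limited-spot assumption (focal diameter ≈ λ); the exact result is about 113 Pa, which rounds to the quoted 100 Pa when the intermediate values are rounded to powers of ten.

```python
import math

wavelength = 1064e-9   # m
power = 30e-3          # W
c = 299_792_458.0      # m/s

area = math.pi * (wavelength / 2.0) ** 2   # spot area, ~1e-12 m^2
force = power / c                          # force on an absorber, ~1e-10 N
pressure = force / area                    # ~100 Pa (113 Pa unrounded)

print(f"area     = {area:.2e} m^2")
print(f"force    = {force:.2e} N")
print(f"pressure = {pressure:.1f} Pa")
```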
Optical control (that is, manipulation of the motion) of a plethora of objects has been realized: from kilometres-long beams (such as in the LIGO interferometer) [ 28 ] to clouds of atoms, [ 29 ] and from micro-engineered trampolines [ 30 ] to superfluids. [ 31 ] [ 32 ] Opposite to exciting or amplifying motion, light can also damp the motion of objects. Laser cooling is a method of cooling materials very close to absolute zero by converting some of the material's motional energy into light. Kinetic energy and thermal energy of the material are synonymous here, because they represent the energy associated with the Brownian motion of the material. Atoms traveling towards a laser light source see the light Doppler-shifted into the absorption frequency of the target element. The radiation pressure on the atom slows its movement in a particular direction until the Doppler shift moves the light out of the frequency range of the element, causing an overall cooling effect. [ 34 ] Another active research area of laser-matter interaction is the radiation pressure acceleration of ions or protons from thin-foil targets. [ 35 ] High-energy ion beams can be generated for medical applications (for example in ion beam therapy [ 36 ]) by the radiation pressure of short laser pulses on ultra-thin foils.
https://en.wikipedia.org/wiki/Radiation_pressure
Radiation protection, also known as radiological protection, is defined by the International Atomic Energy Agency (IAEA) as "The protection of people from harmful effects of exposure to ionizing radiation, and the means for achieving this". [ 1 ] Exposure can be from a source of radiation external to the human body or due to internal irradiation caused by the ingestion of radioactive contamination. Ionizing radiation is widely used in industry and medicine, and can present a significant health hazard by causing microscopic damage to living tissue. There are two main categories of ionizing radiation health effects. At high exposures, it can cause "tissue" effects, also called "deterministic" effects because of the certainty of their occurrence, conventionally indicated by the unit gray and resulting in acute radiation syndrome. For low-level exposures there can be statistically elevated risks of radiation-induced cancer, called "stochastic" effects because of the uncertainty of their occurrence, conventionally indicated by the unit sievert. Fundamental to radiation protection is the avoidance or reduction of dose using the simple protective measures of time, distance and shielding. The duration of exposure should be limited to that necessary, the distance from the source of radiation should be maximised, and the source or the target shielded wherever possible. To measure personal dose uptake in occupational or emergency exposure, personal dosimeters are used for external radiation, and bioassay techniques are applied for internal dose due to ingestion of radioactive contamination. For radiation protection and dosimetry assessment the International Commission on Radiological Protection (ICRP) and International Commission on Radiation Units and Measurements (ICRU) publish recommendations and data which are used to calculate the biological effects on the human body of certain levels of radiation, and thereby to advise acceptable dose uptake limits. The ICRP recommends, develops and maintains the International System of Radiological Protection, based on evaluation of the large body of scientific studies available to equate risk to received dose levels. The system's health objectives are "to manage and control exposures to ionising radiation so that deterministic effects are prevented, and the risks of stochastic effects are reduced to the extent reasonably achievable". [ 2 ] The ICRP's recommendations flow down to national and regional regulators, which have the opportunity to incorporate them into their own law; this process is shown in the accompanying block diagram. In most countries a national regulatory authority works towards ensuring a secure radiation environment in society by setting dose limitation requirements that are generally based on the recommendations of the ICRP. The ICRP recognises planned, emergency, and existing exposure situations, as described below. [ 3 ] The ICRP uses the following overall principles for all controllable exposure situations. [ 7 ] Three factors control the amount, or dose, of radiation received from a source: time, distance and shielding. Radiation exposure can be managed by a combination of these factors. Internal dose, due to the inhalation or ingestion of radioactive substances, can result in stochastic or deterministic effects, depending on the amount of radioactive material ingested and other biokinetic factors.
The risk from a low-level internal source is represented by the dose quantity committed dose, which carries the same risk as the same amount of external effective dose. The intake of radioactive material can occur through four pathways: inhalation, ingestion, absorption through the skin, and entry through wounds. The occupational hazards from airborne radioactive particles in nuclear and radio-chemical applications are greatly reduced by the extensive use of gloveboxes to contain such material. To protect against breathing in radioactive particles in ambient air, respirators with particulate filters are worn. To monitor the concentration of radioactive particles in the ambient air, radioactive particulate monitoring instruments measure the concentration or presence of airborne materials. For ingested radioactive materials in food and drink, specialist laboratory radiometric assay methods are used to measure the concentration of such materials. [ 9 ] The ICRP recommends a number of limits for dose uptake in table 8 of ICRP report 103. These limits are "situational", for planned, emergency and existing situations. Within these situations, limits are given for certain exposed groups. [ 10 ] The public information dose chart of the US Department of Energy applies to US regulation, which is based on ICRP recommendations. Note that the examples in lines 1 to 4 of that chart have a scale of dose rate (radiation per unit time), whilst 5 and 6 have a scale of total accumulated dose. ALARP is an acronym for an important principle in exposure to radiation and other occupational health risks: in the UK it stands for As Low As Reasonably Practicable. [ 12 ] The aim is to minimize the risk of radioactive exposure or other hazard while keeping in mind that some exposure may be acceptable in order to further the task at hand. The equivalent term ALARA, As Low As Reasonably Achievable, is more commonly used outside the UK. This compromise is well illustrated in radiology. The application of radiation can aid the patient by providing doctors and other health care professionals with a medical diagnosis, but the exposure of the patient should be reasonably low enough to keep the statistical probability of cancers or sarcomas (stochastic effects) below an acceptable level, and to eliminate deterministic effects (e.g. skin reddening or cataracts). An acceptable level of incidence of stochastic effects is considered to be, for a worker, equal to the risk in other work generally considered to be safe. This policy is based on the principle that any amount of radiation exposure, no matter how small, can increase the chance of negative biological effects such as cancer. It is also based on the principle that the probability of the occurrence of negative effects of radiation exposure increases with cumulative lifetime dose. These ideas are combined to form the linear no-threshold model, which holds that there is no threshold dose below which the rate of occurrence of stochastic effects ceases to increase with increasing dose. At the same time, radiology and other practices that involve the use of ionizing radiation bring benefits, so reducing radiation exposure can reduce the efficacy of a medical practice. The economic cost, for example of adding a barrier against radiation, must also be considered when applying the ALARP principle. Computed tomography, better known as CT or CAT scanning, has made an enormous contribution to medicine, though not without some risk. The ionizing radiation used in CT scans can lead to radiation-induced cancer.
[ 13 ] Age is a significant factor in the risk associated with CT scans, [ 14 ] and in procedures involving children, and in examinations that do not require extensive imaging, lower doses are used. [ 15 ] The radiation dosimeter is an important personal dose measuring instrument. It is worn by the person being monitored and is used to estimate the external radiation dose deposited in the individual wearing the device. Dosimeters are used for gamma, X-ray, beta and other strongly penetrating radiation, but not for weakly penetrating radiation such as alpha particles. Traditionally, film badges were used for long-term monitoring, and quartz fibre dosimeters for short-term monitoring. However, these have been mostly superseded by thermoluminescent dosimetry (TLD) badges and electronic dosimeters. Electronic dosimeters can give an alarm warning if a preset dose threshold has been reached, enabling safer working in potentially higher radiation levels, where the received dose must be continually monitored. Workers exposed to radiation, such as radiographers, nuclear power plant workers, doctors using radiotherapy, those in laboratories using radionuclides, and HAZMAT teams, are required to wear dosimeters so a record of occupational exposure can be made. Such devices are generally termed "legal dosimeters" if they have been approved for use in recording personnel dose for regulatory purposes. Dosimeters can be worn to obtain a whole-body dose, and there are also specialist types that can be worn on the fingers or clipped to headgear to measure the localised body irradiation for specific activities. Common types of wearable dosimeter for ionizing radiation include the film badges, quartz fibre dosimeters, TLD badges and electronic dosimeters described above. [ 16 ] [ 17 ] Almost any material can act as a shield from gamma or X-rays if used in sufficient amounts. Different types of ionizing radiation interact in different ways with shielding material. The effectiveness of shielding is dependent on stopping power, which varies with the type and energy of radiation and the shielding material used. Different shielding techniques are therefore used depending on the application and the type and energy of the radiation. Shielding reduces the intensity of radiation, with the attenuation increasing with thickness. This is an exponential relationship: each successive equal slice of shielding material reduces the remaining intensity by the same fraction, so the absolute effect of each added slice gradually diminishes. A quantity known as the halving-thickness is used to calculate this. For example, a practical shield in a fallout shelter with ten halving-thicknesses of packed dirt, which is roughly 115 cm (3 ft 9 in), reduces gamma rays to 1/1024 of their original intensity (i.e. $2^{-10}$). The effectiveness of a shielding material in general increases with its atomic number, called Z, except for neutron shielding: neutrons are more readily attenuated by neutron absorbers and moderators such as compounds of boron (e.g. boric acid), cadmium, carbon and hydrogen. Graded-Z shielding is a laminate of several materials with different Z values (atomic numbers) designed to protect against ionizing radiation. Compared to single-material shielding, the same mass of graded-Z shielding has been shown to reduce electron penetration by over 60%. [ 18 ] It is commonly used in satellite-based particle detectors, offering several benefits. Designs vary, but typically involve a gradient from high-Z (usually tantalum) through successively lower-Z elements such as tin, steel, and copper, usually ending with aluminium. Sometimes even lighter materials such as polypropylene or boron carbide are used. [ 19 ] [ 20 ]
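Because attenuation is exponential, the halving-thickness mentioned above gives a one-line estimate of transmitted gamma intensity, I = I₀ · 2^(−x / x_half). The helper below checks the fallout-shelter example (ten halving-thicknesses of packed dirt); the per-layer value of about 11.5 cm is simply the quoted 115 cm divided by ten.

```python
def transmitted_fraction(thickness_cm, halving_thickness_cm):
    """Fraction of gamma intensity remaining behind a shield.

    Attenuation is exponential: each halving-thickness cuts the
    remaining intensity in half, so I/I0 = 2**(-x / x_half).
    """
    return 2.0 ** (-thickness_cm / halving_thickness_cm)

# Fallout-shelter example from the text: ten halving-thicknesses of
# packed dirt (~11.5 cm per halving-thickness, ~115 cm in total).
print(transmitted_fraction(115.0, 11.5))   # 1/1024 ~ 0.000977
```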
In a typical graded-Z shield, the high-Z layer effectively scatters protons and electrons. It also absorbs gamma rays, which produces X-ray fluorescence. Each subsequent layer absorbs the X-ray fluorescence of the previous material, eventually reducing the energy to a suitable level. Each decrease in energy produces bremsstrahlung and Auger electrons, which are below the detector's energy threshold. Some designs also include an outer layer of aluminium, which may simply be the skin of the satellite. The effectiveness of a material as a biological shield is related to its cross-section for scattering and absorption, and to a first approximation is proportional to the total mass of material per unit area interposed along the line of sight between the radiation source and the region to be protected. Hence, shielding strength or "thickness" is conventionally measured in units of g/cm². The radiation that manages to get through falls exponentially with the thickness of the shield. In X-ray facilities, walls surrounding the room with the X-ray generator may contain lead shielding such as lead sheets, or the plaster may contain barium sulfate. Operators view the target through a leaded glass screen, or if they must remain in the same room as the target, wear lead aprons. Particle radiation consists of a stream of charged or neutral particles, both charged ions and subatomic elementary particles. This includes solar wind, cosmic radiation, and neutron flux in nuclear reactors. Electromagnetic radiation consists of emissions of electromagnetic waves, the properties of which depend on the wavelength. In some cases, improper shielding can actually make the situation worse, when the radiation interacts with the shielding material and creates secondary radiation that is absorbed by the organism more readily. For example, although high atomic number materials are very effective in shielding photons, using them to shield beta particles may cause higher radiation exposure due to the production of bremsstrahlung X-rays, and hence low atomic number materials are recommended. Also, using a material with a high neutron activation cross section to shield neutrons will result in the shielding material itself becoming radioactive, and hence more dangerous than if it were not present. Personal protective equipment (PPE) includes all clothing and accessories which can be worn to prevent severe illness and injury as a result of exposure to radioactive material. These include the SR100 (protection for 1 hour) and the SR200 (protection for 2 hours). Because radiation can affect humans through internal and external contamination, various protection strategies have been developed to protect humans from the harmful effects of radiation exposure from a spectrum of sources. [ 23 ] A few of the strategies developed to shield from internal, external, and high-energy radiation are outlined below. Internal contamination protection equipment protects against the inhalation and ingestion of radioactive material. Internal deposition of radioactive material results in direct exposure of radiation to organs and tissues inside the body. The respiratory protective equipment described below is designed to minimize the possibility of such material being inhaled or ingested as emergency workers are exposed to potentially radioactive environments.
This equipment includes reusable air-purifying respirators (APR), powered air-purifying respirators (PAPR), supplied-air respirators (SAR), auxiliary escape respirators, and self-contained breathing apparatus (SCBA). External contamination protection equipment provides a barrier to prevent radioactive material from being deposited externally on the body or clothes. The dermal protective equipment described here acts as a barrier to block radioactive material from physically touching the skin, but does not protect against externally penetrating high-energy radiation; it includes chemical-resistant inner suits, bunker gear (Level C equivalent), non-gas-tight encapsulating suits (Level B equivalent), and totally encapsulating chemical- and vapour-protective suits (Level A equivalent). There are many solutions to shielding against low-energy radiation exposure, such as low-energy X-rays. Lead shielding wear such as lead aprons can protect patients and clinicians from the potentially harmful radiation effects of day-to-day medical examinations. It is quite feasible to protect large surface areas of the body from radiation in the lower-energy spectrum because very little shielding material is required to provide the necessary protection. Recent studies show that copper shielding is far more effective than lead and is likely to replace it as the standard material for radiation shielding. [ citation needed ] Personal shielding against more energetic radiation such as gamma radiation is very difficult to achieve, as the large mass of shielding material required to properly protect the entire body would make functional movement nearly impossible. For this, partial body shielding of radio-sensitive internal organs is the most viable protection strategy. The immediate danger of intense exposure to high-energy gamma radiation is acute radiation syndrome (ARS), a result of irreversible bone marrow damage. The concept of selective shielding is based on the regenerative potential of the hematopoietic stem cells found in bone marrow. The regenerative quality of stem cells makes it necessary only to protect enough bone marrow to repopulate the body with unaffected stem cells after the exposure: a similar concept is applied in hematopoietic stem cell transplantation (HSCT), a common treatment for patients with leukemia. This scientific advance allows for the development of a new class of relatively lightweight protective equipment that shields the high concentrations of bone marrow, deferring the hematopoietic sub-syndrome of acute radiation syndrome to much higher dosages. One technique is to apply selective shielding to protect the high concentration of bone marrow stored in the hips, together with other radio-sensitive organs in the abdominal area. This allows first responders a safe way to perform necessary missions in radioactive environments. [ 24 ]
The area radiation monitor will measure the ambient radiation, usually X-ray, gamma or neutrons; these are radiations that can have significant radiation levels over a range in excess of tens of metres from their source, and thereby cover a wide area. Gamma radiation "interlock monitors" are used in applications to prevent inadvertent exposure of workers to an excess dose by preventing personnel access to an area when a high radiation level is present. These interlock the process access directly. Airborne contamination monitors measure the concentration of radioactive particles in the ambient air to guard against radioactive particles being ingested, or deposited in the lungs of personnel. These instruments will normally give a local alarm, but are often connected to an integrated safety system so that areas of plant can be evacuated and personnel are prevented from entering an area of high airborne contamination. Personnel exit monitors (PEM) are used to monitor workers who are exiting a "contamination controlled" or potentially contaminated area. These can be in the form of hand monitors, clothing frisk probes, or whole-body monitors. These monitor the surface of the worker's body and clothing to check if any radioactive contamination has been deposited. These generally measure alpha, beta or gamma, or combinations of these. The UK National Physical Laboratory publishes a good practice guide through its Ionising Radiation Metrology Forum concerning the provision of such equipment and the methodology of calculating the alarm levels to be used. [ 25 ] Portable instruments are hand-held or transportable. The hand-held instrument is generally used as a survey meter to check an object or person in detail, or to assess an area where no installed instrumentation exists. They can also be used for personnel exit monitoring or personnel contamination checks in the field. These generally measure alpha, beta or gamma, or combinations of these. Transportable instruments are generally instruments that would have been permanently installed, but are temporarily placed in an area to provide continuous monitoring where it is likely there will be a hazard. Such instruments are often installed on trolleys to allow easy deployment, and are associated with temporary operational situations. In the United Kingdom the HSE has issued a user guidance note on selecting the correct radiation measurement instrument for the application concerned. [ 26 ] This covers all radiation instrument technologies, and is a useful comparative guide. A number of commonly used detection instrument types are used for both fixed and survey monitoring. Spacecraft, both robotic and crewed, must cope with the high radiation environment of outer space. Radiation emitted by the Sun and other galactic sources , and trapped in radiation "belts" , is more dangerous and hundreds of times more intense than radiation sources such as medical X-rays or normal cosmic radiation usually experienced on Earth. [ 27 ] When the intensely ionizing particles found in space strike human tissue, the result can be cell damage and may eventually lead to cancer. The usual method for radiation protection is material shielding by spacecraft and equipment structures (usually aluminium), possibly augmented by polyethylene in human spaceflight, where the main concern is high-energy protons and cosmic ray ions.
On uncrewed spacecraft in high-electron-dose environments such as Jupiter missions, or medium Earth orbit (MEO), additional shielding with materials of a high atomic number can be effective. On long-duration crewed missions, advantage can be taken of the good shielding characteristics of liquid hydrogen fuel and water. The NASA Space Radiation Laboratory makes use of a particle accelerator that produces beams of protons or heavy ions. These ions are typical of those accelerated in cosmic sources and by the Sun. The beams of ions move through a 100 m (328-foot) transport tunnel to the 37 m² (400-square-foot) shielded target hall. There, they hit the target, which may be a biological sample or shielding material. [ 27 ] In a 2002 NASA study, it was determined that materials that have high hydrogen contents, such as polyethylene , can reduce primary and secondary radiation to a greater extent than metals, such as aluminum. [ 28 ] The problem with this "passive shielding" method is that radiation interactions in the material generate secondary radiation. Active shielding, that is, using magnets, high voltages, or artificial magnetospheres to slow down or deflect radiation, has been considered as a potentially feasible way to combat radiation. So far, the cost, power requirements, and weight of active shielding equipment outweigh its benefits. For example, active shielding equipment would need a volume comparable to a habitable space just to house it, and magnetic and electrostatic configurations are often not homogeneous in intensity, allowing high-energy particles to penetrate the magnetic and electric fields through low-intensity regions, like the cusps in Earth's dipolar magnetic field. As of 2012, NASA was conducting research in superconducting magnetic architecture for potential active shielding applications. [ 29 ] The dangers of radioactivity and radiation were not immediately recognized. The discovery of X-rays in 1895 led to widespread experimentation by scientists, physicians, and inventors. Many people began recounting stories of burns, hair loss and worse in technical journals as early as 1896. In February of that year, Professor Daniel and Dr. Dudley of Vanderbilt University performed an experiment involving X-raying Dudley's head that resulted in his hair loss. A report by Dr. H.D. Hawks, a graduate of Columbia College, of his severe hand and chest burns in an X-ray demonstration, was the first of many other reports in Electrical Review . [ 30 ] Many experimenters, including Elihu Thomson at Thomas Edison 's lab, William J. Morton , and Nikola Tesla , also reported burns. Elihu Thomson deliberately exposed a finger to an X-ray tube over a period of time and experienced pain, swelling, and blistering. [ 31 ] Other effects, including ultraviolet rays and ozone, were sometimes blamed for the damage. [ 32 ] Many physicists claimed that there were no effects from X-ray exposure at all. [ 31 ] As early as 1902, William Herbert Rollins wrote almost despairingly that his warnings about the dangers involved in the careless use of X-rays were not being heeded, either by industry or by his colleagues. By this time Rollins had proved that X-rays could kill experimental animals, could cause a pregnant guinea pig to abort, and that they could kill a fetus. [ 33 ] [ self-published source? ] He also stressed that "animals vary in susceptibility to the external action of X-light" and warned that these differences be considered when patients were treated by means of X-rays.
Before the biological effects of radiation were known, many physicians and corporations began marketing radioactive substances as patent medicine and in the form of glow-in-the-dark pigments. Examples included radium enema treatments and radium-containing waters to be drunk as tonics. Marie Curie protested against this sort of treatment, warning that the effects of radiation on the human body were not well understood. Curie later died from aplastic anaemia , likely caused by exposure to ionizing radiation. By the 1930s, after a number of cases of bone necrosis and death among radium treatment enthusiasts, radium-containing medicinal products had been largely removed from the market ( radioactive quackery ).
https://en.wikipedia.org/wiki/Radiation_protection
Radiation reduced hybrid is a procedure for discovering the locations of genetic markers relative to one another. The relative locations of these markers can be combined into a physical map or a genetic map . The radiation hybrid technique begins as another way to amplify and purify DNA , the first step in any sequencing project: radiation is used to break the DNA into pieces, these pieces are incorporated into a hybrid cell, and the hybrid can be grown in large quantities. One can then check for the presence of various genetic markers using PCR and linkage analysis to resolve the distance between the markers. That is, if two genetic markers are near each other, they are less likely to be separated by the DNA-breaking radiation. The technique is similar to traditional linkage analysis, which depends on genetic recombination to calculate the distance between two genetic markers. The procedure utilizes two cell lines, neither of which can survive in toxic media on its own, but which carry genes that can resist the toxin when combined in the same cell. The cell line under study is irradiated, causing breaks in the DNA. These cells are fused with the other cell line, producing a hybrid. If the hybrid incorporates genes from both cells, it will be able to survive in the toxic media. The cells that survive can be grown in large quantities, thus amplifying the DNA that was incorporated from the irradiated cell line. One can prepare a sample of DNA from the hybrid cell line and use PCR to amplify two specific genetic markers. By running the PCR products on a gel, one can determine whether both markers are in the cell line. If both markers are usually present together in the hybrids, one can conclude that they are near each other.
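A minimal two-point sketch of this logic follows. The presence/absence panel is invented for illustration, and the simple ratio used here stands in for the maximum-likelihood estimators that real radiation hybrid mapping software applies across many hybrids and markers.

```python
# Hypothetical radiation hybrid panel: each row is one hybrid clone, with
# 1 = marker detected by PCR, 0 = marker absent. Both the marker pair and
# the data are invented for illustration.
panel = [
    (1, 1), (1, 1), (0, 0), (1, 0),
    (1, 1), (0, 1), (0, 0), (1, 1),
]

# Hybrids retaining at least one of the two markers are informative.
informative = [(a, b) for a, b in panel if a or b]

# Breakage frequency: how often radiation separated the two markers,
# i.e. exactly one of the pair was retained in a hybrid.
discordant = sum(1 for a, b in informative if a != b)
theta = discordant / len(informative)

print(f"breakage frequency ~ {theta:.2f} (lower = markers likely closer together)")
```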
https://en.wikipedia.org/wiki/Radiation_reduced_hybrid
Radiation sensitivity is the susceptibility of a material to physical or chemical changes induced by radiation . [ 1 ] Examples of radiation-sensitive materials are silver chloride , photoresists and biomaterials . Pine trees are more susceptible to radiation than birch, owing to the greater complexity of the pine's DNA. Examples of radiation-insensitive materials are metals and ionic crystals such as quartz and sapphire . The radiation effect depends on the type of the irradiating particles, their energy, and the number of incident particles per unit volume. Radiation effects can be transient or permanent. The persistence of the radiation effect depends on the stability of the induced physical and chemical change. Physical radiation effects that depend on diffusion properties can be thermally annealed, whereby the original structure of the material is recovered. Chemical radiation effects usually cannot be recovered. [ 2 ] [ 3 ]
https://en.wikipedia.org/wiki/Radiation_sensitivity
Radiation therapy or radiotherapy ( RT , RTx , or XRT ) is a treatment using ionizing radiation , generally provided as part of cancer therapy to either kill or control the growth of malignant cells . It is normally delivered by a linear particle accelerator . Radiation therapy may be curative in a number of types of cancer if they are localized to one area of the body, and have not spread to other parts . It may also be used as part of adjuvant therapy , to prevent tumor recurrence after surgery to remove a primary malignant tumor (for example, early stages of breast cancer). Radiation therapy is synergistic with chemotherapy , and has been used before, during, and after chemotherapy in susceptible cancers. The subspecialty of oncology concerned with radiotherapy is called radiation oncology. A physician who practices in this subspecialty is a radiation oncologist . Radiation therapy is commonly applied to the cancerous tumor because of its ability to control cell growth. Ionizing radiation works by damaging the DNA of cancerous tissue leading to cellular death . To spare normal tissues (such as skin or organs which radiation must pass through to treat the tumor), shaped radiation beams are aimed from several angles of exposure to intersect at the tumor, providing a much larger absorbed dose there than in the surrounding healthy tissue. Besides the tumor itself, the radiation fields may also include the draining lymph nodes if they are clinically or radiologically involved with the tumor, or if there is thought to be a risk of subclinical malignant spread. It is necessary to include a margin of normal tissue around the tumor to allow for uncertainties in daily set-up and internal tumor motion. These uncertainties can be caused by internal movement (for example, respiration and bladder filling) and movement of external skin marks relative to the tumor position. Radiation oncology is the medical specialty concerned with prescribing radiation, and is distinct from radiology , the use of radiation in medical imaging and diagnosis . Radiation may be prescribed by a radiation oncologist with intent to cure or for adjuvant therapy. It may also be used as palliative treatment (where cure is not possible and the aim is for local disease control or symptomatic relief) or as therapeutic treatment (where the therapy has survival benefit and can be curative). [ 1 ] It is also common to combine radiation therapy with surgery , chemotherapy, hormone therapy , immunotherapy or some mixture of the four. Most common cancer types can be treated with radiation therapy in some way. The precise treatment intent (curative, adjuvant, neoadjuvant therapeutic , or palliative) will depend on the tumor type, location, and stage , as well as the general health of the patient. Total body irradiation (TBI) is a radiation therapy technique used to prepare the body to receive a bone marrow transplant . Brachytherapy , in which a radioactive source is placed inside or next to the area requiring treatment, is another form of radiation therapy that minimizes exposure to healthy tissue during procedures to treat cancers of the breast, prostate, and other organs. Radiation therapy has several applications in non-malignant conditions, such as the treatment of trigeminal neuralgia , acoustic neuromas , severe thyroid eye disease , pterygium , pigmented villonodular synovitis , and prevention of keloid scar growth, vascular restenosis , and heterotopic ossification . 
[ 1 ] [ 2 ] [ 3 ] [ 4 ] The use of radiation therapy in non-malignant conditions is limited partly by worries about the risk of radiation-induced cancers. It is estimated that half of the 1.2 million invasive cancer cases diagnosed in the United States in 2022 received radiation therapy as part of their treatment program. [ 5 ] Different cancers respond to radiation therapy in different ways. [ 6 ] [ 7 ] [ 8 ] The response of a cancer to radiation is described by its radiosensitivity. Highly radiosensitive cancer cells are rapidly killed by modest doses of radiation. These include leukemias , most lymphomas , and germ cell tumors . The majority of epithelial cancers are only moderately radiosensitive, and require a significantly higher dose of radiation (60–70 Gy) to achieve a radical cure. Some types of cancer are notably radioresistant, that is, much higher doses are required to produce a radical cure than may be safe in clinical practice. Renal cell cancer and melanoma are generally considered to be radioresistant, but radiation therapy is still a palliative option for many patients with metastatic melanoma. Combining radiation therapy with immunotherapy is an active area of investigation and has shown some promise for melanoma and other cancers. [ 9 ] It is important to distinguish the radiosensitivity of a particular tumor, which to some extent is a laboratory measure, from the radiation "curability" of a cancer in actual clinical practice. For example, leukemias are not generally curable with radiation therapy, because they are disseminated through the body. Lymphoma may be radically curable if it is localized to one area of the body. Similarly, many of the common, moderately radioresponsive tumors are routinely treated with curative doses of radiation therapy if they are at an early stage. Examples include non-melanoma skin cancer , head and neck cancer , breast cancer , non-small cell lung cancer , cervical cancer , anal cancer , and prostate cancer . With the exception of oligometastatic disease, metastatic cancers are incurable with radiation therapy because it is not possible to treat the whole body. [ citation needed ] Modern radiation therapy relies on a CT scan to identify the tumor and surrounding normal structures and to perform dose calculations for the creation of a complex radiation treatment plan. The patient receives small skin marks to guide the placement of treatment fields. [ 10 ] Patient positioning is crucial at this stage, as the patient will have to be placed in an identical position during each treatment. Many patient positioning devices have been developed for this purpose, including masks and cushions which can be molded to the patient. Image-guided radiation therapy is a method that uses imaging to correct for positional errors of each treatment session. [ citation needed ] Building on the principles of image-guided radiation therapy, daily MR-guided adaptive radiation therapy (MRgART) offers many dosimetric advantages over the traditional single-plan RT workflow, including the ability to conform the high-dose region to the tumor as the anatomy changes throughout the course of RT. [ 11 ] [ 12 ] [ 13 ] The response of a tumor to radiation therapy is also related to its size. Due to complex radiobiology , very large tumors are affected less by radiation compared to smaller tumors or microscopic disease. Various strategies are used to overcome this effect. The most common technique is surgical resection prior to radiation therapy.
This is most commonly seen in the treatment of breast cancer with wide local excision or mastectomy followed by adjuvant radiation therapy . Another method is to shrink the tumor with neoadjuvant chemotherapy prior to radical radiation therapy. A third technique is to enhance the radiosensitivity of the cancer by giving certain drugs during a course of radiation therapy. Examples of radiosensitizing drugs include cisplatin , nimorazole , and cetuximab . [ 14 ] The impact of radiotherapy varies between different types of cancer and different groups. [ 15 ] For example, for breast cancer after breast-conserving surgery , radiotherapy has been found to halve the rate at which the disease recurs. [ 16 ] In pancreatic cancer, radiotherapy has increased survival times for inoperable tumors. [ 17 ] Radiation therapy (RT) is in itself painless, but carries iatrogenic side-effect risks. Many low-dose palliative treatments (for example, radiation therapy to bony metastases ) cause minimal or no side effects, although short-term pain flare-up can be experienced in the days following treatment due to oedema compressing nerves in the treated area. Higher doses can cause varying side effects during treatment (acute side effects), in the months or years following treatment (long-term side effects), or after re-treatment (cumulative side effects). The nature, severity, and longevity of side effects depend on the organs that receive the radiation, the treatment itself (type of radiation, dose, fractionation , concurrent chemotherapy), and the patient. Serious radiation complications may occur in 5% of RT cases. Acute (near immediate) or sub-acute (2 to 3 months post RT) radiation side effects may develop after 50 Gy RT dosing. Late or delayed radiation injury (6 months to decades) may develop after 65 Gy. [ 5 ] Most side effects are predictable and expected. Side effects from radiation are usually limited to the area of the patient's body that is under treatment. Side effects are dose-dependent; for example, higher doses of head and neck radiation can be associated with cardiovascular complications, thyroid dysfunction, and pituitary axis dysfunction. [ 18 ] Modern radiation therapy aims to reduce side effects to a minimum and to help the patient understand and deal with side effects that are unavoidable. The main side effects reported are fatigue and skin irritation, like a mild to moderate sunburn. The fatigue often sets in during the middle of a course of treatment and can last for weeks after treatment ends. The irritated skin will heal, but may not be as elastic as it was before. [ 19 ] Late side effects occur months to years after treatment and are generally limited to the area that has been treated. They are often due to damage of blood vessels and connective tissue cells. Many late effects are reduced by fractionating treatment into smaller parts. Cumulative effects from this process should not be confused with long-term effects – when short-term effects have disappeared and long-term effects are subclinical, reirradiation can still be problematic. [ 50 ] These doses are calculated by the radiation oncologist and many factors are taken into account before the subsequent radiation takes place. During the first two weeks after fertilization , radiation therapy is lethal but not teratogenic . [ 51 ] High doses of radiation during pregnancy induce anomalies , impaired growth and intellectual disability , and there may be an increased risk of childhood leukemia and other tumors in the offspring.
[ 51 ] In males who have previously undergone radiotherapy, there appears to be no increase in genetic defects or congenital malformations in their children conceived after therapy. [ 51 ] However, the use of assisted reproductive technologies and micromanipulation techniques might increase this risk. [ 51 ] Hypopituitarism commonly develops after radiation therapy for sellar and parasellar neoplasms, extrasellar brain tumors, head and neck tumors, and following whole-body irradiation for systemic malignancies. [ 52 ] 40–50% of children treated for childhood cancer develop some endocrine side effect. [ 53 ] Radiation-induced hypopituitarism mainly affects growth hormone and gonadal hormones . [ 52 ] In contrast, adrenocorticotrophic hormone (ACTH) and thyroid stimulating hormone (TSH) deficiencies are the least common among people with radiation-induced hypopituitarism. [ 52 ] Changes in prolactin secretion are usually mild, and vasopressin deficiency appears to be very rare as a consequence of radiation. [ 52 ] Delayed tissue injury with impaired wound-healing capability often develops after receiving doses in excess of 65 Gy. A diffuse injury pattern occurs because of the isodose distribution of external beam radiotherapy : while the targeted tumor receives the majority of the radiation, healthy tissue at incremental distances from the center of the tumor is also irradiated in a diffuse pattern due to beam divergence. These wounds demonstrate progressive, proliferative endarteritis , inflamed arterial linings that disrupt the tissue's blood supply. Such tissue ends up chronically hypoxic , fibrotic , and without an adequate nutrient and oxygen supply. Surgery of previously irradiated tissue has a very high failure rate; for example, women who have received radiation for breast cancer develop late-effect chest wall tissue fibrosis and hypovascularity, making successful reconstruction and healing difficult, if not impossible. [ 5 ] There are rigorous procedures in place to minimise the risk of accidental overexposure of radiation therapy to patients. However, mistakes do occasionally occur; for example, the radiation therapy machine Therac-25 was responsible for at least six accidents between 1985 and 1987, where patients were given up to one hundred times the intended dose; two people were killed directly by the radiation overdoses. From 2005 to 2010, a hospital in Missouri overexposed 76 patients (most with brain cancer) because new radiation equipment had been set up incorrectly. [ 54 ] Although medical errors are exceptionally rare, radiation oncologists, medical physicists and other members of the radiation therapy treatment team are working to eliminate them. In 2010 the American Society for Radiation Oncology (ASTRO) launched a safety initiative called Target Safely that, among other things, aimed to record errors nationwide so that doctors can learn from each and every mistake and prevent them from recurring. ASTRO also publishes a list of questions for patients to ask their doctors about radiation safety to ensure every treatment is as safe as possible. [ 55 ] Radiation therapy is used to treat early-stage Dupuytren's disease and Ledderhose disease . When Dupuytren's disease is at the nodules-and-cords stage, or the fingers are at a minimal-deformation stage of less than 10 degrees, radiation therapy is used to prevent further progress of the disease. Radiation therapy is also used post-surgery in some cases to prevent the disease continuing to progress.
Low doses of radiation are used, typically three gray per day for five days, with a break of three months followed by another phase of three gray per day for five days. [ 56 ] Radiation therapy works by damaging the DNA of cancer cells, which can cause them to undergo mitotic catastrophe . [ 57 ] This DNA damage is caused by one of two types of energy, photon or charged particle . The damage results from either direct or indirect ionization of the atoms which make up the DNA chain. Indirect ionization happens as a result of the ionization of water, forming free radicals , notably hydroxyl radicals, which then damage the DNA. In photon therapy, most of the radiation effect is through free radicals. Cells have mechanisms for repairing single-strand DNA damage and double-stranded DNA damage. However, double-stranded DNA breaks are much more difficult to repair, and can lead to dramatic chromosomal abnormalities and genetic deletions. Targeting double-stranded breaks increases the probability that cells will undergo cell death . Cancer cells are generally less differentiated and more stem cell -like; they reproduce more than most healthy differentiated cells, and have a diminished ability to repair sub-lethal damage. Single-strand DNA damage is then passed on through cell division; damage to the cancer cells' DNA accumulates, causing them to die or reproduce more slowly. One of the major limitations of photon radiation therapy is that the cells of solid tumors become deficient in oxygen . Solid tumors can outgrow their blood supply, causing a low-oxygen state known as hypoxia . Oxygen is a potent radiosensitizer , increasing the effectiveness of a given dose of radiation by forming DNA-damaging free radicals. Tumor cells in a hypoxic environment may be as much as 2 to 3 times more resistant to radiation damage than those in a normal oxygen environment. [ 58 ] Much research has been devoted to overcoming hypoxia, including the use of high-pressure oxygen tanks, hyperthermia therapy (heat therapy which dilates blood vessels to the tumor site), blood substitutes that carry increased oxygen, hypoxic cell radiosensitizer drugs such as misonidazole and metronidazole , and hypoxic cytotoxins (tissue poisons), such as tirapazamine . Newer research approaches are currently being studied, including preclinical and clinical investigations into the use of an oxygen diffusion-enhancing compound such as trans sodium crocetinate as a radiosensitizer. [ 59 ] Charged particles such as protons and boron , carbon , and neon ions can cause direct damage to cancer cell DNA through high LET ( linear energy transfer ) and have an antitumor effect independent of tumor oxygen supply, because these particles act mostly via direct energy transfer, usually causing double-stranded DNA breaks. Due to their relatively large mass, protons and other charged particles have little lateral side scatter in the tissue – the beam does not broaden much, stays focused on the tumor shape, and delivers only a small dose to surrounding tissue. They also more precisely target the tumor using the Bragg peak effect. See proton therapy for a good example of the different effects of intensity-modulated radiation therapy (IMRT) vs. charged particle therapy . This precision reduces damage to healthy tissue between the charged particle radiation source and the tumor and sets a finite range for tissue damage after the tumor has been reached.
In contrast, IMRT's use of uncharged particles causes its energy to damage healthy cells when it exits the body. This exiting damage is not therapeutic, can increase treatment side effects, and increases the probability of secondary cancer induction. [ 60 ] This difference is very important in cases where the close proximity of other organs makes any stray ionization very damaging (for example, head and neck cancers ). This X-ray exposure is especially harmful for children, due to their growing bodies; depending on a multitude of factors, they are around 10 times more sensitive to developing secondary malignancies after radiotherapy than adults. [ 61 ] The amount of radiation used in photon radiation therapy is measured in grays (Gy), and varies depending on the type and stage of cancer being treated. For curative cases, the typical dose for a solid epithelial tumor ranges from 60 to 80 Gy, while lymphomas are treated with 20 to 40 Gy. Preventive (adjuvant) doses are typically around 45–60 Gy in 1.8–2 Gy fractions (for breast, head, and neck cancers). Many other factors are considered by radiation oncologists when selecting a dose, including whether the patient is receiving chemotherapy, patient comorbidities, whether radiation therapy is being administered before or after surgery, and the degree of success of surgery. Delivery parameters of a prescribed dose are determined during treatment planning (part of dosimetry ). Treatment planning is generally performed on dedicated computers using specialized treatment planning software. Depending on the radiation delivery method, several angles or sources may be used to sum to the total necessary dose. The planner will try to design a plan that delivers a uniform prescription dose to the tumor and minimizes dose to surrounding healthy tissues. In radiation therapy, three-dimensional dose distributions may be evaluated using the dosimetry technique known as gel dosimetry . [ 62 ] The total dose is fractionated (spread out over time) for several important reasons. Fractionation allows normal cells time to recover, while tumor cells are generally less efficient in repair between fractions. Fractionation also allows tumor cells that were in a relatively radio-resistant phase of the cell cycle during one treatment to cycle into a sensitive phase of the cycle before the next fraction is given. Similarly, tumor cells that were chronically or acutely hypoxic (and therefore more radioresistant) may reoxygenate between fractions, improving the tumor cell kill. [ 63 ] Fractionation regimens are individualised between different radiation therapy centers and even between individual doctors. In North America, Australia, and Europe, the typical fractionation schedule for adults is 1.8 to 2 Gy per day, five days a week. In some cancer types, prolonging the fractionation schedule too far can allow the tumor to begin repopulating, and for these tumor types, including head-and-neck and cervical squamous cell cancers, radiation treatment is preferably completed within a certain amount of time. For children, a typical fraction size may be 1.5 to 1.8 Gy per day, as smaller fraction sizes are associated with reduced incidence and severity of late-onset side effects in normal tissues. In some cases, two fractions per day are used near the end of a course of treatment. This schedule, known as a concomitant boost regimen or hyperfractionation, is used on tumors that regenerate more quickly when they are smaller.
In particular, tumors in the head-and-neck demonstrate this behavior. Patients receiving palliative radiation to treat uncomplicated painful bone metastasis should not receive more than a single fraction of radiation. [ 64 ] A single treatment gives comparable pain relief and morbidity outcomes to multiple-fraction treatments, and for patients with limited life expectancy, a single treatment is best to improve patient comfort. [ 64 ] One fractionation schedule that is increasingly being used and continues to be studied is hypofractionation. This is a radiation treatment in which the total dose of radiation is divided into a smaller number of large doses. Typical doses vary significantly by cancer type, from 2.2 Gy/fraction to 20 Gy/fraction, the latter being typical of stereotactic treatments (stereotactic ablative body radiotherapy, or SABR – also known as SBRT, or stereotactic body radiotherapy) for subcranial lesions, or SRS (stereotactic radiosurgery) for intracranial lesions. The rationale of hypofractionation is to reduce the probability of local recurrence by denying clonogenic cells the time they require to reproduce, and also to exploit the radiosensitivity of some tumors. [ 65 ] In particular, stereotactic treatments are intended to destroy clonogenic cells by a process of ablation, i.e., the delivery of a dose intended to destroy clonogenic cells directly, rather than to interrupt the process of clonogenic cell division repeatedly (apoptosis), as in routine radiotherapy. Different cancer types have different radiation sensitivity. While predicting the sensitivity based on genomic or proteomic analyses of biopsy samples has proven challenging, [ 66 ] [ 67 ] the predictions of radiation effect on individual patients from genomic signatures of intrinsic cellular radiosensitivity have been shown to associate with clinical outcome. [ 68 ] An alternative approach to genomics and proteomics was offered by the discovery that radiation protection in microbes is conferred by non-enzymatic complexes of manganese and small organic metabolites. [ 69 ] The content and variation of manganese (measurable by electron paramagnetic resonance) were found to be good predictors of radiosensitivity , and this finding extends also to human cells. [ 70 ] An association was confirmed between total cellular manganese content and its variation, and clinically inferred radioresponsiveness in different tumor cells, a finding that may be useful for more precise radiodosages and improved treatment of cancer patients. [ 71 ] Historically, the three main divisions of radiation therapy are external beam radiation therapy, brachytherapy, and systemic radioisotope therapy. The differences relate to the position of the radiation source; external is outside the body, brachytherapy uses sealed radioactive sources placed precisely in the area under treatment, and systemic radioisotopes are given by infusion or oral ingestion. Brachytherapy can use temporary or permanent placement of radioactive sources. The temporary sources are usually placed by a technique called afterloading. In afterloading a hollow tube or applicator is placed surgically in the organ to be treated, and the sources are loaded into the applicator after the applicator is implanted. This minimizes radiation exposure to health care personnel. Particle therapy is a special case of external beam radiation therapy where the particles are protons or heavier ions .
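As a back-of-the-envelope illustration of the prescriptions quoted earlier (curative courses of 60–80 Gy delivered in 1.8–2 Gy fractions, five days a week), the sketch below converts a total dose into a number of fractions and an approximate course length. It is simple arithmetic, not a clinical planning tool.

```python
import math

def course_length(total_dose_gy: float, dose_per_fraction_gy: float,
                  fractions_per_week: int = 5) -> tuple[int, float]:
    """Number of fractions and approximate duration in weeks for a
    conventionally fractionated course."""
    fractions = math.ceil(total_dose_gy / dose_per_fraction_gy)
    return fractions, fractions / fractions_per_week

for total in (60, 70, 80):
    n, weeks = course_length(total, 2.0)
    print(f"{total} Gy at 2 Gy/fraction: {n} fractions, about {weeks:.0f} weeks")
```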
A review of radiation therapy randomised clinical trials from 2018 to 2021 found many practice-changing data and new concepts emerging from RCTs, identifying techniques that improve the therapeutic ratio, techniques that lead to more tailored treatments, stressing the importance of patient satisfaction, and identifying areas that require further study. [ 72 ] [ 73 ] The following three sections refer to treatment using X-rays. Historically, conventional external beam radiation therapy (2DXRT) was delivered via two-dimensional beams using kilovoltage therapy X-ray units, medical linear accelerators that generate high-energy X-rays, or machines that were similar to a linear accelerator in appearance but used a sealed radioactive source. [ 74 ] [ 75 ] 2DXRT mainly consists of a single beam of radiation delivered to the patient from several directions: often front or back, and both sides. Conventional refers to the way the treatment is planned or simulated on a specially calibrated diagnostic X-ray machine known as a simulator, because it recreates the linear accelerator actions (or sometimes by eye), and to the usually well-established arrangements of the radiation beams used to achieve a desired plan . The aim of simulation is to accurately target or localize the volume which is to be treated. This technique is well established and is generally quick and reliable. The concern is that some high-dose treatments may be limited by the radiation toxicity capacity of healthy tissues which lie close to the target tumor volume. An example of this problem is seen in radiation of the prostate gland, where the sensitivity of the adjacent rectum limited the dose which could be safely prescribed using 2DXRT planning to such an extent that tumor control may not be easily achievable. Prior to the invention of CT, physicians and physicists had limited knowledge about the true radiation dosage delivered to both cancerous and healthy tissue. For this reason, 3-dimensional conformal radiation therapy has become the standard treatment for almost all tumor sites. More recently other forms of imaging are used, including MRI, PET, SPECT and ultrasound. [ 76 ] Stereotactic radiation is a specialized type of external beam radiation therapy. It uses focused radiation beams targeting a well-defined tumor using extremely detailed imaging scans. Radiation oncologists perform stereotactic treatments, often with the help of a neurosurgeon for tumors in the brain or spine. There are two types of stereotactic radiation. Stereotactic radiosurgery (SRS) is when doctors use a single or several stereotactic radiation treatments of the brain or spine. Stereotactic body radiation therapy (SBRT) refers to one or several stereotactic radiation treatments within the body, such as the lungs. [ 77 ] Some doctors say an advantage to stereotactic treatments is that they deliver the right amount of radiation to the cancer in a shorter amount of time than traditional treatments, which can often take 6 to 11 weeks. Treatments are also given with extreme accuracy, which should limit the effect of the radiation on healthy tissues. One problem with stereotactic treatments is that they are only suitable for certain small tumors. Stereotactic treatments can be confusing because many hospitals call the treatments by the name of the manufacturer rather than calling them SRS or SBRT.
Brand names for these treatments include Axesse, Cyberknife , Gamma Knife , Novalis, Primatom, Synergy, X-Knife , TomoTherapy , Trilogy and Truebeam . [ 78 ] This list changes as equipment manufacturers continue to develop new, specialized technologies to treat cancers. The planning of radiation therapy treatment has been revolutionized by the ability to delineate tumors and adjacent normal structures in three dimensions using specialized CT and/or MRI scanners and planning software. [ 79 ] Virtual simulation, the most basic form of planning, allows more accurate placement of radiation beams than is possible using conventional X-rays, where soft-tissue structures are often difficult to assess and normal tissues difficult to protect. An enhancement of virtual simulation is 3-dimensional conformal radiation therapy (3DCRT) , in which the profile of each radiation beam is shaped to fit the profile of the target from a beam's eye view (BEV) using a multileaf collimator (MLC) and a variable number of beams. When the treatment volume conforms to the shape of the tumor, the relative toxicity of radiation to the surrounding normal tissues is reduced, allowing a higher dose of radiation to be delivered to the tumor than conventional techniques would allow. [ 10 ] Intensity-modulated radiation therapy (IMRT) is an advanced type of high-precision radiation that is the next generation of 3DCRT. [ 80 ] IMRT also improves the ability to conform the treatment volume to concave tumor shapes, [ 10 ] for example when the tumor is wrapped around a vulnerable structure such as the spinal cord or a major organ or blood vessel. [ 81 ] Computer-controlled X-ray accelerators distribute precise radiation doses to malignant tumors or specific areas within the tumor. The pattern of radiation delivery is determined using highly tailored computing applications to perform optimization and treatment simulation ( treatment planning ). The radiation dose is made consistent with the 3-D shape of the tumor by controlling, or modulating, the radiation beam's intensity. The radiation dose intensity is elevated near the gross tumor volume, while radiation among the neighboring normal tissues is decreased or avoided completely. This results in better tumor targeting, lessened side effects, and improved treatment outcomes compared with even 3DCRT. 3DCRT is still used extensively for many body sites, but the use of IMRT is growing in more complicated body sites such as CNS, head and neck, prostate, breast, and lung. Unfortunately, IMRT is limited by its need for additional time from experienced medical personnel. This is because physicians must manually delineate the tumors one CT image at a time through the entire disease site, which can take much longer than 3DCRT preparation. Then, medical physicists and dosimetrists must be engaged to create a viable treatment plan. Also, IMRT technology has only been used commercially since the late 1990s, even at the most advanced cancer centers, so radiation oncologists who did not learn it as part of their residency programs must find additional sources of education before implementing IMRT. Evidence of improved survival benefit from either of these two techniques over conventional radiation therapy (2DXRT) is growing for many tumor sites, but the ability to reduce toxicity is generally accepted. This is particularly the case for head and neck cancers in a series of pivotal trials performed by Professor Christopher Nutting of the Royal Marsden Hospital.
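The optimization step described above can be caricatured in a few lines. The sketch below is a toy, not any vendor's algorithm: a random matrix stands in for the dose "influence matrix" a real dose engine would compute, and a weighted least-squares objective is minimized by projected gradient descent so that beamlet weights stay non-negative.

```python
import numpy as np

# Toy fluence-map optimization. D[i, j] = dose to voxel i per unit weight
# of beamlet j; here D is random, standing in for a real dose engine.
rng = np.random.default_rng(0)
n_voxels, n_beamlets = 40, 12
D = rng.uniform(0.0, 1.0, (n_voxels, n_beamlets))

# Prescription: the first 10 voxels are "tumor" (want 2.0 in arbitrary
# dose units), the rest are healthy tissue (want 0), weighted less.
target = np.zeros(n_voxels)
target[:10] = 2.0
w = np.where(target > 0, 1.0, 0.3)

x = np.zeros(n_beamlets)                 # beamlet weights, kept >= 0
step = 1.0 / np.linalg.norm(D, 2) ** 2   # conservative gradient step size
for _ in range(2000):
    grad = D.T @ (w * (D @ x - target))  # gradient of weighted least squares
    x = np.maximum(x - step * grad, 0.0) # project onto non-negative weights

dose = D @ x
print("mean tumor dose:  ", round(float(dose[:10].mean()), 2))
print("mean healthy dose:", round(float(dose[10:].mean()), 2))
```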
Both techniques enable dose escalation, potentially increasing usefulness. There has been some concern, particularly with IMRT, [ 82 ] about increased exposure of normal tissue to radiation and the consequent potential for secondary malignancy. Overconfidence in the accuracy of imaging may increase the chance of missing lesions that are invisible on the planning scans (and therefore not included in the treatment plan) or that move between or during a treatment (for example, due to respiration or inadequate patient immobilization). New techniques are being developed to better control this uncertainty – for example, real-time imaging combined with real-time adjustment of the therapeutic beams. This new technology is called image-guided radiation therapy or four-dimensional radiation therapy. Another technique is the real-time tracking and localization of one or more small implantable electric devices implanted inside or close to the tumor. There are various types of medical implantable devices that are used for this purpose. One is a magnetic transponder which senses the magnetic field generated by several transmitting coils, and then transmits the measurements back to the positioning system to determine the location. [ 83 ] The implantable device can also be a small wireless transmitter sending out an RF signal which is then received by a sensor array and used for localization and real-time tracking of the tumor position. [ 84 ] [ 85 ] A well-studied issue with IMRT is the "tongue and groove effect", which results in unwanted underdosing due to irradiating through extended tongues and grooves of overlapping MLC (multileaf collimator) leaves. [ 86 ] While solutions to this issue have been developed, which either reduce the TG effect to negligible amounts or remove it completely, they depend upon the method of IMRT being used, and some of them carry costs of their own. [ 86 ] Some texts distinguish "tongue and groove error" from "tongue or groove error", according to whether both sides or only one side of the aperture is occluded. [ 87 ] Volumetric modulated arc therapy (VMAT) is a radiation technique introduced in 2007 [ 88 ] which can achieve highly conformal dose distributions on target volume coverage and sparing of normal tissues. The specificity of this technique is to modify three parameters during the treatment. VMAT delivers radiation by rotating the gantry (usually 360° rotating fields with one or more arcs), changing the speed and shape of the beam with a multileaf collimator (MLC) (a "sliding window" system of movement), and changing the fluence output rate (dose rate) of the medical linear accelerator. VMAT has an advantage in patient treatment, compared with conventional static-field intensity-modulated radiotherapy (IMRT), of reduced radiation delivery times. [ 89 ] [ 90 ] Comparisons between VMAT and conventional IMRT for their sparing of healthy tissues and organs at risk (OAR) depend upon the cancer type. In the treatment of nasopharyngeal , oropharyngeal and hypopharyngeal carcinomas , VMAT provides equivalent or better protection of the organs at risk. [ 88 ] [ 89 ] [ 90 ] In the treatment of prostate cancer the OAR protection result is mixed, [ 88 ] with some studies favoring VMAT, others favoring IMRT. [ 91 ] Temporally feathered radiation therapy (TFRT) is a radiation technique introduced in 2018 [ 92 ] which aims to use the inherent non-linearities in normal tissue repair to allow for sparing of these tissues without affecting the dose delivered to the tumor.
The application of this technique, which has yet to be automated, has been described carefully to enhance the ability of departments to perform it, and in 2021 it was reported as feasible in a small clinical trial, [ 93 ] though its efficacy has yet to be formally studied. Automated treatment planning has become an integrated part of radiotherapy treatment planning. There are in general two approaches to automated planning. 1) Knowledge-based planning, where the treatment planning system has a library of high-quality plans, from which it can predict the target coverage and the dose-volume histograms of the organs at risk. [ 94 ] 2) The other approach is commonly called protocol-based planning, where the treatment planning system tries to mimic an experienced treatment planner and, through an iterative process, evaluates the plan quality on the basis of the protocol. [ 95 ] [ 96 ] [ 97 ] [ 98 ] In particle therapy ( proton therapy being one example), energetic ionizing particles (protons or carbon ions) are directed at the target tumor. [ 99 ] The dose increases while the particle penetrates the tissue, up to a maximum (the Bragg peak ) that occurs near the end of the particle's range , and it then drops to (almost) zero. The advantage of this energy deposition profile is that less energy is deposited into the healthy tissue surrounding the target tissue. Auger therapy (AT) makes use of a very high dose [ 100 ] of ionizing radiation in situ that provides molecular modifications at an atomic scale. AT differs from conventional radiation therapy in several aspects: it neither relies upon radioactive nuclei to cause cellular radiation damage at a cellular dimension, nor engages multiple external pencil-beams from different directions to zero in on and deliver a dose to the targeted area with reduced dose outside the targeted tissue/organ locations. Instead, the in situ delivery of a very high dose at the molecular level using AT aims for in situ molecular modifications involving molecular breakages and molecular re-arrangements, such as a change of stacking structures as well as cellular metabolic functions related to the said molecule structures. In many types of external beam radiotherapy, motion can negatively impact the treatment delivery by moving target tissue out of, or other healthy tissue into, the intended beam path. Some form of patient immobilisation is common, to prevent large movements of the body during treatment; however, this cannot prevent all motion, for example as a result of breathing . Several techniques have been developed to account for motion like this. [ 101 ] [ 102 ] Deep inspiration breath-hold (DIBH) is commonly used for breast treatments where it is important to avoid irradiating the heart. In DIBH the patient holds their breath after breathing in to provide a stable position for the treatment beam to be turned on. This can be done automatically using an external monitoring system such as a spirometer or a camera and markers. [ 103 ] The same monitoring techniques, as well as 4DCT imaging, can also be used for respiratory-gated treatment, where the patient breathes freely and the beam is only engaged at certain points in the breathing cycle. [ 104 ] Other techniques include using 4DCT imaging to plan treatments with margins that account for motion, and active movement of the treatment couch, or beam, to follow motion.
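To make the particle-therapy depth-dose contrast described above concrete, the toy model below compares a photon curve that decays roughly exponentially with depth against a proton curve that peaks sharply near the end of its range (the Bragg peak) and then falls to almost zero. All numbers are invented for illustration and are not measured beam data.

```python
import numpy as np

depth = np.linspace(0.0, 20.0, 81)   # depth in tissue, cm

# Photon beam (simplified): dose falls off roughly exponentially with
# depth, so tissue beyond the tumor still receives an exit dose.
photon = np.exp(-0.06 * depth)

# Proton beam (toy): modest entrance dose plus a sharp Bragg peak near
# the end of an assumed ~15 cm range, then (almost) zero beyond it.
proton = 0.3 + 0.7 * np.exp(-((depth - 15.0) ** 2) / (2 * 0.5 ** 2))
proton[depth > 16.0] = 0.0

for d in (0, 5, 10, 15, 18):
    i = int(np.argmin(np.abs(depth - d)))
    print(f"depth {d:2d} cm: photon {photon[i]:.2f}, proton {proton[i]:.2f}")
```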
[ 105 ] Contact X-ray brachytherapy (also called "CXB", "electronic brachytherapy" or the "Papillon technique") is a type of radiation therapy using low-energy (50 kVp) kilovoltage X-rays applied directly to the tumor to treat rectal cancer . The process first involves an endoscopic examination to identify the tumor in the rectum, then inserting a treatment applicator through the anus into the rectum and placing it against the cancerous tissue. Finally, a treatment tube is inserted into the applicator to deliver high doses of X-rays (30 Gy per fraction) directly onto the tumor, given three times at two-week intervals over a four-week period. It is typically used for treating early rectal cancer in patients who may not be candidates for surgery. [ 106 ] [ 107 ] [ 108 ] A 2015 NICE review found the main side effect to be bleeding, which occurred in about 38% of cases, and radiation-induced ulcer, which occurred in 27% of cases. [ 106 ] Brachytherapy is delivered by placing radiation source(s) inside or next to the area requiring treatment. Brachytherapy is commonly used as an effective treatment for cervical, [ 109 ] prostate, [ 110 ] breast, [ 111 ] and skin cancer [ 112 ] and can also be used to treat tumors in many other body sites. [ 113 ] In brachytherapy, radiation sources are precisely placed directly at the site of the cancerous tumor. This means that the irradiation only affects a very localized area – exposure to radiation of healthy tissues further away from the sources is reduced. These characteristics of brachytherapy provide advantages over external beam radiation therapy – the tumor can be treated with very high doses of localized radiation, whilst reducing the probability of unnecessary damage to surrounding healthy tissues. [ 113 ] [ 114 ] A course of brachytherapy can often be completed in less time than other radiation therapy techniques. This can help reduce the chance of surviving cancer cells dividing and growing in the intervals between each radiation therapy dose. [ 114 ] As one example of the localized nature of breast brachytherapy, the SAVI device delivers the radiation dose through multiple catheters, each of which can be individually controlled. This approach decreases the exposure of healthy tissue and the resulting side effects, compared both to external beam radiation therapy and older methods of breast brachytherapy. [ 115 ] Radionuclide therapy (also known as systemic radioisotope therapy, radiopharmaceutical therapy, or molecular radiotherapy) is a form of targeted therapy. Targeting can be due to the chemical properties of the isotope, such as radioiodine , which is absorbed by the thyroid gland a thousandfold better than by other bodily organs. Targeting can also be achieved by attaching the radioisotope to another molecule or antibody to guide it to the target tissue. The radioisotopes are delivered through infusion (into the bloodstream) or ingestion. Examples are the infusion of metaiodobenzylguanidine (MIBG) to treat neuroblastoma , of oral iodine-131 to treat thyroid cancer or thyrotoxicosis , and of hormone-bound lutetium-177 and yttrium-90 to treat neuroendocrine tumors ( peptide receptor radionuclide therapy ). Another example is the injection of radioactive yttrium-90 or holmium-166 microspheres into the hepatic artery to radioembolize liver tumors or liver metastases. These microspheres are used for the treatment approach known as selective internal radiation therapy .
The microspheres are approximately 30 μm in diameter (about one-third the width of a human hair) and are delivered directly into the artery supplying blood to the tumors. These treatments begin by guiding a catheter up through the femoral artery in the leg, navigating to the desired target site, and administering treatment. The blood feeding the tumor will carry the microspheres directly to the tumor, enabling a more selective approach than traditional systemic chemotherapy. There are currently three different kinds of microspheres: SIR-Spheres , TheraSphere and QuiremSpheres. A major use of systemic radioisotope therapy is in the treatment of bone metastasis from cancer. The radioisotopes travel selectively to areas of damaged bone, and spare normal undamaged bone. Isotopes commonly used in the treatment of bone metastasis are radium-223 , [ 116 ] strontium-89 and samarium ( 153 Sm) lexidronam . [ 117 ] In 2002, the United States Food and Drug Administration (FDA) approved ibritumomab tiuxetan (Zevalin), which is an anti- CD20 monoclonal antibody conjugated to yttrium-90. [ 118 ] In 2003, the FDA approved the tositumomab /iodine ( 131 I) tositumomab regimen (Bexxar), which is a combination of an iodine-131 labelled and an unlabelled anti-CD20 monoclonal antibody. [ 119 ] These medications were the first agents of what is known as radioimmunotherapy , and they were approved for the treatment of refractory non-Hodgkin's lymphoma . Intraoperative radiation therapy (IORT) is the application of therapeutic levels of radiation to a target area, such as a cancer tumor, while the area is exposed during surgery . [ 120 ] The rationale for IORT is to deliver a high dose of radiation precisely to the targeted area with minimal exposure of surrounding tissues, which are displaced or shielded during the IORT. Conventional radiation techniques such as external beam radiotherapy (EBRT) following surgical removal of the tumor have several drawbacks: the tumor bed where the highest dose should be applied is frequently missed due to the complex localization of the wound cavity, even when modern radiotherapy planning is used. Additionally, the usual delay between the surgical removal of the tumor and EBRT may allow a repopulation of the tumor cells. These potentially harmful effects can be avoided by delivering the radiation more precisely to the targeted tissues, leading to immediate sterilization of residual tumor cells. Another aspect is that wound fluid has a stimulating effect on tumor cells. IORT was found to inhibit the stimulating effects of wound fluid. [ 121 ] Medicine has used radiation therapy as a treatment for cancer for more than 100 years, with its earliest roots traced to the discovery of X-rays in 1895 by Wilhelm Röntgen . [ 122 ] Emil Grubbe of Chicago was possibly the first American physician to use X-rays to treat cancer, beginning in 1896. [ 123 ] The field of radiation therapy began to grow in the early 1900s, largely due to the groundbreaking work of Nobel Prize –winning scientist Marie Curie (1867–1934), who discovered the radioactive elements polonium and radium in 1898. This began a new era in medical treatment and research. [ 122 ] Through the 1920s the hazards of radiation exposure were not understood, and little protection was used. Radium was believed to have wide curative powers, and radiotherapy was applied to many diseases. Prior to World War II , the only practical sources of radiation for radiotherapy were radium, its "emanation" radon gas, and the X-ray tube .
External beam radiotherapy (teletherapy) began at the turn of the century with relatively low-voltage (<150 kV) X-ray machines. It was found that while superficial tumors could be treated with low-voltage X-rays, more penetrating, higher-energy beams were required to reach tumors inside the body, requiring higher voltages. Orthovoltage X-rays , which used tube voltages of 200–500 kV, began to be used during the 1920s. To reach the most deeply buried tumors without exposing intervening skin and tissue to dangerous radiation doses required rays with energies of 1 MV or above, called "megavolt" radiation. Producing megavolt X-rays required voltages on the X-ray tube of 3 to 5 million volts , which required huge, expensive installations. Megavoltage X-ray units were first built in the late 1930s, but because of cost were limited to a few institutions. One of the first, installed at St Bartholomew's Hospital , London, in 1937 and used until 1960, used a 30-foot-long X-ray tube and weighed 10 tons. Radium produced megavolt gamma rays , but was extremely rare and expensive due to its low occurrence in ores. In 1937 the entire world supply of radium for radiotherapy was 50 grams, valued at £800,000, or $50 million in 2005 dollars. The invention of the nuclear reactor in the Manhattan Project during World War II made possible the production of artificial radioisotopes for radiotherapy. Cobalt therapy , using teletherapy machines that emit megavolt gamma rays from cobalt-60 , a radioisotope produced by irradiating ordinary cobalt metal in a reactor, revolutionized the field between the 1950s and the early 1980s. Cobalt machines were relatively cheap, robust and simple to use, although due to its 5.27-year half-life the cobalt had to be replaced about every 5 years. Medical linear particle accelerators , developed since the 1940s, began replacing X-ray and cobalt units in the 1980s, and these older therapies are now declining. The first medical linear accelerator was used at the Hammersmith Hospital in London in 1953. [ 75 ] Linear accelerators can produce higher energies, have more collimated beams, and, unlike radioisotope therapies, do not produce radioactive waste with its attendant disposal problems. With Godfrey Hounsfield 's invention of computed tomography (CT) in 1971, three-dimensional planning became a possibility and created a shift from 2-D to 3-D radiation delivery. CT-based planning allows physicians to more accurately determine the dose distribution using axial tomographic images of the patient's anatomy. The advent of new imaging technologies, including magnetic resonance imaging (MRI) in the 1970s and positron emission tomography (PET) in the 1980s, has moved radiation therapy from 3-D conformal to intensity-modulated radiation therapy (IMRT) and to image-guided radiation therapy and tomotherapy . These advances allowed radiation oncologists to better see and target tumors, which has resulted in better treatment outcomes, more organ preservation and fewer side effects. [ 124 ] While access to radiotherapy is improving globally, more than half of patients in low- and middle-income countries still did not have access to the therapy as of 2017. [ 125 ]
https://en.wikipedia.org/wiki/Radiation_therapy
In the study of heat transfer, radiative cooling [ 1 ] [ 2 ] [ 3 ] is the process by which a body loses heat by thermal radiation. As Planck's law describes, every physical body spontaneously and continuously emits electromagnetic radiation. Radiative cooling has been applied in various contexts throughout human history, including ice making in India and Iran, [ 4 ] heat shields for spacecraft, [ 5 ] and in architecture. In 2014, a scientific breakthrough in the use of photonic metamaterials made daytime radiative cooling possible. [ 6 ] [ 7 ] It has since been proposed as a strategy, known as passive daytime radiative cooling, to mitigate local and global warming caused by greenhouse gas emissions. [ 8 ] Infrared radiation can pass through dry, clear air in the wavelength range of 8–13 μm. Materials that can absorb energy and radiate it in those wavelengths exhibit a strong cooling effect. Materials that can also reflect 95% or more of sunlight in the 200 nm to 2.5 μm range can exhibit cooling even in direct sunlight. [ 9 ] The Earth-atmosphere system is radiatively cooled, emitting long-wave (infrared) radiation which balances the absorption of short-wave (visible light) energy from the sun. Convective transport of heat and evaporative transport of latent heat are both important in removing heat from the surface and distributing it in the atmosphere. Pure radiative transport is more important higher up in the atmosphere. Diurnal and geographical variation further complicate the picture. The large-scale circulation of the Earth's atmosphere is driven by the difference in absorbed solar radiation per square meter, as the sun heats the Earth more in the Tropics, mostly because of geometrical factors. The atmospheric and oceanic circulation redistributes some of this energy as sensible heat and latent heat, partly via the mean flow and partly via eddies, known as cyclones in the atmosphere. Thus the tropics radiate less to space than they would if there were no circulation, and the poles radiate more; however, in absolute terms the tropics radiate more energy to space. Radiative cooling is commonly experienced on cloudless nights, when heat is radiated into outer space from Earth's surface, or from the skin of a human observer. The effect is well known among amateur astronomers. The effect can be experienced by comparing the feeling of cold on the skin when looking straight up into a cloudless night sky for several seconds with the feeling after a sheet of paper is placed between the face and the sky. Since outer space radiates at a temperature of about 3 K (−270.15 °C; −454.27 °F), and the sheet of paper radiates at about 300 K (27 °C; 80 °F), around room temperature, the sheet of paper radiates more heat to the face than does the darkened cosmos. The effect is blunted by Earth's surrounding atmosphere, and particularly the water vapor it contains, so the apparent temperature of the sky is far warmer than that of outer space. The sheet does not block the cold, but instead reflects heat to the face and radiates back the heat of the face that it just absorbed. The same radiative cooling mechanism can cause frost or black ice to form on surfaces exposed to the clear night sky, even when the ambient temperature does not fall below freezing.
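The magnitude of this night-sky effect can be estimated with the Stefan-Boltzmann law. The following Python sketch is illustrative only: it assumes an idealized grey-body skin emissivity of 0.98 and round temperature values rather than measurements, and it compares the net radiative loss from skin facing the 3 K cosmos with the loss when facing a 300 K sheet of paper.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

def net_radiative_loss(t_skin, t_facing, emissivity=0.98):
    # Grey-body net exchange per unit area: q = eps * sigma * (T1^4 - T2^4)
    return emissivity * SIGMA * (t_skin**4 - t_facing**4)

t_skin = 305.0  # assumed skin temperature in kelvin (about 32 degC)
print(net_radiative_loss(t_skin, 3.0))    # facing the cosmos: ~481 W/m^2
print(net_radiative_loss(t_skin, 300.0))  # facing the paper:  ~31 W/m^2

In reality the atmosphere re-radiates strongly outside the 8–13 μm window, so the face sees an effective sky temperature far above 3 K, but the large gap between the two figures explains the sensation described above.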
The term radiative cooling is generally used for local processes, though the same principles apply to cooling over geological time, an approach first used by Kelvin to estimate the age of the Earth (although his estimate ignored the substantial heat released by radioisotope decay, not known at the time, and the effects of convection in the mantle). Radiative cooling is one of the few ways an object in space can give off energy. In particular, white dwarf stars are no longer generating energy by fusion or gravitational contraction, and have no significant stellar wind. So the only way their temperature changes is by radiative cooling. This makes their temperature as a function of age very predictable, so by observing the temperature, astronomers can deduce the age of the star. [ 10 ] [ 11 ] Radiative cooling has been proposed as a method of reducing temperature increases caused by greenhouse gases: by reducing the energy needed for air conditioning, [ 18 ] [ 19 ] lowering the urban heat island effect, [ 20 ] [ 21 ] and lowering human body temperatures. [ 22 ] [ 12 ] [ 23 ] [ 24 ] [ 18 ] Cool roofs combine high solar reflectance with high infrared emittance, thereby simultaneously reducing heat gain from the sun and increasing heat removal through radiation. Radiative cooling thus offers potential for passive cooling of residential and commercial buildings. Traditional building surfaces, such as paint coatings, brick, and concrete, have high emittances of up to 0.96. [ 26 ] They radiate heat into the sky to passively cool buildings at night. If made sufficiently reflective to sunlight, these materials can also achieve radiative cooling during the day. The most common radiative coolers found on buildings are white cool-roof paint coatings, which have solar reflectances of up to 0.94, and thermal emittances of up to 0.96. [ 27 ] The solar reflectance of the paints arises from optical scattering by the dielectric pigments embedded in the polymer paint resin, while the thermal emittance arises from the polymer resin. However, because typical white pigments like titanium dioxide and zinc oxide absorb ultraviolet radiation, the solar reflectances of paints based on such pigments do not exceed 0.95. In 2014, researchers developed the first daytime radiative cooler, using a multi-layer thermal photonic structure that selectively emits long wavelength infrared radiation into space and can achieve 5 °C sub-ambient cooling under direct sunlight. [ 28 ] Researchers later developed paintable porous polymer coatings, whose pores scatter sunlight to give solar reflectance of 0.96-0.99 and thermal emittance of 0.97. [ 29 ] In experiments under direct sunlight, the coatings achieved 6 °C sub-ambient temperatures and cooling powers of 96 W/m 2 . Other notable radiative cooling strategies include dielectric films on metal mirrors, [ 30 ] and polymer or polymer composites on silver or aluminum films. [ 31 ] Silvered polymer films with solar reflectances of 0.97 and thermal emittance of 0.96, which remain 11 °C cooler than commercial white paints under the mid-summer sun, were reported in 2015. [ 32 ] Researchers have explored designs with dielectric silicon dioxide or silicon carbide particles embedded in polymers that are translucent in the solar wavelengths and emissive in the infrared. [ 33 ] [ 34 ] In 2017, an example of this design, with resonant polar silica microspheres randomly embedded in a polymeric matrix, was reported.
[ 35 ] The material is translucent to sunlight and has an infrared emissivity of 0.93 in the infrared atmospheric transmission window. When backed with a silver coating, the material achieved a midday radiative cooling power of 93 W/m 2 under direct sunshine, and it is amenable to high-throughput, economical roll-to-roll manufacturing. High emissivity coatings that facilitate radiative cooling may be used in reusable thermal protection systems (RTPS) in spacecraft and hypersonic aircraft. In such heat shields a high emissivity material, such as molybdenum disilicide (MoSi 2 ), is applied on a thermally insulating ceramic substrate. [ 5 ] High levels of total emissivity, typically in the range 0.8-0.9, need to be maintained across a range of high temperatures. Planck's law dictates that at higher temperatures the radiative emission peak shifts to shorter wavelengths (higher frequencies), influencing material selection as a function of operating temperature. In addition to effective radiative cooling, radiative thermal protection systems should provide damage tolerance and may incorporate self-healing functions through the formation of a viscous glass at high temperatures. The James Webb Space Telescope uses radiative cooling to reach its operating temperature of about 50 K. To do this, its large reflective sunshield blocks radiation from the Sun, Earth, and Moon. The telescope structure, kept permanently in shadow by the sunshield, then cools by radiation. Before the invention of artificial refrigeration technology, ice making by nocturnal cooling was common in both India and Iran. In India, such apparatuses consisted of a shallow ceramic tray with a thin layer of water, placed outdoors with a clear exposure to the night sky. The bottom and sides were insulated with a thick layer of hay. On a clear night the water would lose heat by radiation upwards. Provided the air was calm and not too far above freezing, heat gain from the surrounding air by convection was low enough to allow the water to freeze. [ 36 ] [ 37 ] [ 4 ] In Iran, this involved making large flat ice pools, which consisted of a reflection pool of water built on a bed of highly insulating material surrounded by high walls. The high walls protected against convective warming, the insulating bed protected against conductive heating from the ground, and the large flat plane of water permitted evaporative and radiative cooling to take place. The three basic types of radiant cooling are direct, indirect, and fluorescent.
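The cooling powers quoted above can be roughly reproduced with a one-line energy balance. The Python sketch below is a simplified grey-body model, not any published calculation: the effective sky temperature and solar irradiance are assumed round values, and real coolers must also account for the atmosphere's spectral window and for convective gains.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

def net_cooling_power(t_surface, t_sky, solar_irradiance,
                      solar_reflectance, thermal_emittance):
    # Thermal emission, minus absorbed downwelling sky radiation,
    # minus absorbed sunlight (all in W/m^2).
    emitted = thermal_emittance * SIGMA * t_surface**4
    absorbed_sky = thermal_emittance * SIGMA * t_sky**4
    absorbed_sun = (1.0 - solar_reflectance) * solar_irradiance
    return emitted - absorbed_sky - absorbed_sun

# Reflectance and emittance as for the porous polymer coating above;
# the sky temperature (270 K) and irradiance (1000 W/m^2) are assumptions.
print(net_cooling_power(300.0, 270.0, 1000.0, 0.96, 0.97))  # ~113 W/m^2

The result is the same order of magnitude as the 96 W/m 2 measured for the coating, which is as much agreement as so crude a balance can claim.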
https://en.wikipedia.org/wiki/Radiative_cooling
Radiative equilibrium is the condition where the total thermal radiation leaving an object is equal to the total thermal radiation entering it. It is one of the several requirements for thermodynamic equilibrium, but it can occur in the absence of thermodynamic equilibrium. There are various types of radiative equilibrium, which is itself a kind of dynamic equilibrium. Equilibrium, in general, is a state in which opposing forces are balanced, and hence a system does not change in time. Radiative equilibrium is the specific case of thermal equilibrium in which the exchange of heat is by radiative heat transfer. There are several types of radiative equilibrium. An important early contribution was made by Pierre Prevost in 1791. [ 1 ] Prevost considered that what is nowadays called the photon gas or electromagnetic radiation was a fluid that he called "free heat". Prevost proposed that free radiant heat is a very rare fluid, rays of which, like light rays, pass through each other without detectable disturbance of their passage. Prevost's theory of exchanges stated that each body radiates to, and receives radiation from, other bodies. The radiation from each body is emitted regardless of the presence or absence of other bodies. [ 2 ] [ 3 ] Prevost in 1791 offered the following definitions (translated): Absolute equilibrium of free heat is the state of this fluid in a portion of space which receives as much of it as it lets escape. Relative equilibrium of free heat is the state of this fluid in two portions of space which receive from each other equal quantities of heat, and which moreover are in absolute equilibrium, or experience precisely equal changes. Prevost went on to comment that "The heat of several portions of space at the same temperature, and next to one another, is at the same time in the two species of equilibrium." Following Max Planck (1914), [ 4 ] a radiative field is often described in terms of specific radiative intensity, which is a function of each geometrical point in a space region, at an instant of time. [ 5 ] [ 6 ] This is slightly different from Prevost's mode of definition, which was for regions of space. It is also slightly conceptually different from Prevost's definition: Prevost thought in terms of bound and free heat, while today we think in terms of heat in the kinetic and other dynamic energy of molecules, that is to say heat in matter, and the thermal photon gas. A detailed definition is given by R. M. Goody and Y. L. Yung (1989). [ 6 ] They think of the interconversion between thermal radiation and heat in matter. From the specific radiative intensity they derive F_ν, the monochromatic vector flux density of radiation at each point in a region of space, which is equal to the time-averaged monochromatic Poynting vector at that point (D. Mihalas 1978 [ 7 ] on pages 9–11). They define the monochromatic volume-specific rate of gain of heat by matter from radiation as the negative of the divergence of the monochromatic flux density vector; it is a scalar function of the position of the point: h_ν = −∇ · F_ν. They define (pointwise) monochromatic radiative equilibrium by ∇ · F_ν = 0 at every frequency ν, and (pointwise) radiative equilibrium by ∇ · F = 0, where F = ∫ F_ν dν is the flux density integrated over all frequencies. This means that, at every point of a region of space that is in (pointwise) radiative equilibrium, the total interconversion of energy, over all frequencies of radiation, between thermal radiation and energy content in matter is nil (zero).
Pointwise radiative equilibrium is closely related to Prevost's absolute radiative equilibrium. D. Mihalas and B. Weibel-Mihalas (1984) [ 5 ] emphasise that this definition applies to a static medium, in which the matter is not moving. They also consider moving media. Karl Schwarzschild in 1906 [ 8 ] considered a system in which convection and radiation both operated, but radiation was so much more efficient than convection that convection could, as an approximation, be neglected, and radiation could be considered predominant. This applies when the temperature is very high, as for example in a star, but not in a planet's atmosphere. Subrahmanyan Chandrasekhar (1950, page 290) [ 9 ] writes of a model of a stellar atmosphere in which "there are no mechanisms, other than radiation, for transporting heat within the atmosphere ... [and] there are no sources of heat in the surrounding". This is hardly different from Schwarzschild's 1906 approximate concept, but is more precisely stated. Planck (1914, page 40) [ 4 ] refers to a condition of thermodynamic equilibrium, in which "any two bodies or elements of bodies selected at random exchange by radiation equal amounts of heat with each other." The term radiative exchange equilibrium can also be used to refer to two specified regions of space that exchange equal amounts of radiation by emission and absorption (even when the steady state is not one of thermodynamic equilibrium, but is one in which some sub-processes include net transport of matter or energy including radiation). Radiative exchange equilibrium is very nearly the same as Prevost's relative radiative equilibrium. To a first approximation, an example of radiative exchange equilibrium is the exchange of non-window wavelength thermal radiation between the land-and-sea surface and the lowest atmosphere, when there is a clear sky. As a first approximation (W. C. Swinbank 1963, [ 10 ] G. W. Paltridge and C. M. R. Platt 1976, pages 139–140 [ 11 ] ), in the non-window wavenumbers there is zero net exchange between the surface and the atmosphere, while, in the window wavenumbers, there is simply direct radiation from the land-sea surface to space. A like situation occurs between adjacent layers in the turbulently mixed boundary layer of the lower troposphere, expressed in the so-called "cooling to space approximation", first noted by C. D. Rodgers and C. D. Walshaw (1966). [ 12 ] [ 13 ] [ 14 ] [ 15 ] Global radiative equilibrium can be defined for an entire passive celestial system that does not supply its own energy, such as a planet. Liou (2002, page 459) [ 16 ] and other authors use the term global radiative equilibrium to refer to radiative exchange equilibrium globally between Earth and extraterrestrial space; such authors mean that, in this theoretical condition, incoming solar radiation absorbed by Earth's surface and its atmosphere would be equal to outgoing longwave radiation from Earth's surface and its atmosphere. Prevost [ 1 ] would then say that the Earth's surface and its atmosphere regarded as a whole were in absolute radiative equilibrium. Some texts, for example Satoh (2004), [ 17 ] simply use "radiative equilibrium" to refer to global exchange radiative equilibrium. The various global temperatures that may be theoretically conceived for any planet can be computed. Such temperatures include the planetary equilibrium temperature, equivalent blackbody temperature [ 18 ] or effective radiation emission temperature of the planet.
[ 19 ] For a planet with an atmosphere, these temperatures can be different than the mean surface temperature, which may be measured as the global-mean surface air temperature , [ 20 ] or as the global-mean surface skin temperature . [ 21 ] A radiative equilibrium temperature is calculated for the case that the supply of energy from within the planet (for example, from chemical or nuclear sources) is negligibly small; this assumption is reasonable for Earth, but fails, for example, for calculating the temperature of Jupiter , for which internal energy sources are larger than the incident solar radiation, [ 22 ] and hence the actual temperature is higher than the theoretical radiative equilibrium. A star supplies its own energy from nuclear sources, and hence the temperature equilibrium cannot be defined in terms of incident energy only. Cox and Giuli (1968/1984) [ 23 ] define 'radiative equilibrium' for a star , taken as a whole and not confining attention only to its atmosphere, when the rate of transfer as heat of energy from nuclear reactions plus viscosity to the microscopic motions of the material particles of the star is just balanced by the transfer of energy by electromagnetic radiation from the star to space. Note that this radiative equilibrium is slightly different from the previous usage. They note that a star that is radiating energy to space cannot be in a steady state of temperature distribution unless there is a supply of energy, in this case, energy from nuclear reactions within the star, to support the radiation to space. Likewise the condition that is used for the above definition of pointwise radiative equilibrium cannot hold throughout a star that is radiating: internally, the star is in a steady state of temperature distribution, not internal thermodynamic equilibrium. Cox and Giuli's definition allows them to say at the same time that a star is in a steady state of temperature distribution and is in 'radiative equilibrium'; they are assuming that all the radiative energy to space comes from within the star. [ 23 ] When there is enough matter in a region to allow molecular collisions to occur very much more often than absorption or emission of photons, for radiation one speaks of local thermodynamic equilibrium (LTE) . In this case, Kirchhoff's law of equality of radiative absorptivity and emissivity holds. [ 24 ] Two bodies in radiative exchange equilibrium, each in its own local thermodynamic equilibrium, have the same temperature and their radiative exchange complies with the Stokes-Helmholtz reciprocity principle .
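The planetary equilibrium temperature mentioned above follows from a zero-dimensional balance between absorbed sunlight and blackbody emission. A minimal Python sketch, using round published values for Earth's solar constant and Bond albedo:

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

def equilibrium_temperature(solar_constant, bond_albedo):
    # Absorbed shortwave per unit surface area, S*(1-A)/4, balances
    # emitted longwave, sigma*T^4. The factor 4 is the ratio of a
    # sphere's surface area to its intercepting cross-section.
    return ((solar_constant * (1.0 - bond_albedo)) / (4.0 * SIGMA)) ** 0.25

print(equilibrium_temperature(1361.0, 0.30))  # Earth: ~255 K

The result, about 255 K, sits well below Earth's roughly 288 K mean surface air temperature; the gap between the two is the greenhouse effect alluded to in the surrounding discussion of surface versus emission temperatures.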
https://en.wikipedia.org/wiki/Radiative_equilibrium
Radiative flux, also known as radiative flux density or radiation flux (or sometimes power flux density [ 1 ] ), is the amount of power radiated through a given area, in the form of photons or other elementary particles, typically measured in W/m 2 . [ 2 ] It is used in astronomy to determine the magnitude and spectral class of a star and in meteorology to determine the intensity of the convection in the planetary boundary layer . Radiative flux also acts as a generalization of heat flux , which is equal to the radiative flux when restricted to the infrared spectrum . When radiative flux is incident on a surface, it is often called irradiance . Flux emitted from a surface may be called radiant exitance or radiant emittance . The ratio of irradiance reflected to the irradiance received by a surface is called albedo . In geophysics, shortwave flux is a result of specular and diffuse reflection of incident shortwave radiation by the underlying surface. [ 3 ] This shortwave radiation, as solar radiation, can have a profound impact on certain biophysical processes of vegetation, such as canopy photosynthesis and land surface energy budgets, by being absorbed into the soil and canopies. [ 4 ] As it is the main energy source of most weather phenomena, the solar shortwave radiation is used extensively in numerical weather prediction . Longwave flux is a product of both downwelling infrared energy as well as emission by the underlying surface. The cooling associated with the divergence of longwave radiation is necessary for creating and sustaining lasting inversion layers close to the surface during polar night. Longwave radiation flux divergence also plays a role in the formation of fog. [ 5 ]
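As a concrete illustration of the definitions above, the short sketch below computes an albedo and a net surface radiative flux from hypothetical upwelling and downwelling components; the numbers are invented for illustration only.

def albedo(reflected_shortwave, incident_shortwave):
    # Ratio of reflected to received irradiance (dimensionless).
    return reflected_shortwave / incident_shortwave

def net_radiative_flux(sw_down, sw_up, lw_down, lw_up):
    # Net flux into the surface, W/m^2; positive values warm the surface.
    return (sw_down - sw_up) + (lw_down - lw_up)

print(albedo(150.0, 500.0))                            # 0.3
print(net_radiative_flux(500.0, 150.0, 300.0, 400.0))  # 250.0 W/m^2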
https://en.wikipedia.org/wiki/Radiative_flux
In particle physics, a radiative process refers to one elementary particle emitting another and continuing to exist. [ 1 ] This typically happens when a fermion emits a boson such as a gluon or photon.
https://en.wikipedia.org/wiki/Radiative_process
Radiators are heat exchangers used for cooling internal combustion engines, mainly in automobiles but also in piston-engined aircraft, railway locomotives, motorcycles, stationary generating plants or any similar use of such an engine. Internal combustion engines are often cooled by circulating a liquid called engine coolant through the engine block and cylinder head, where it is heated, then through a radiator, where it loses heat to the atmosphere, and then returned to the engine. Engine coolant is usually water-based, but may also be oil. It is common to employ a water pump to force the engine coolant to circulate, and an axial fan [ 1 ] to force air through the radiator. In automobiles and motorcycles with a liquid-cooled internal combustion engine, a radiator is connected to channels running through the engine and cylinder head, through which a liquid (coolant) is pumped by a coolant pump. This liquid may be water (in climates where water is unlikely to freeze), but is more commonly a mixture of water and antifreeze in proportions appropriate to the climate. Antifreeze itself is usually ethylene glycol or propylene glycol (with a small amount of corrosion inhibitor). A typical automotive cooling system thus comprises a coolant pump, coolant channels in the engine block and cylinder head, a radiator, a fan, and a thermostat. The combustion process produces a large amount of heat. If heat were allowed to increase unchecked, detonation would occur, and components outside the engine would fail due to excessive temperature. To combat this effect, coolant is circulated through the engine, where it absorbs heat. Once the coolant absorbs the heat from the engine it continues its flow to the radiator. The radiator transfers heat from the coolant to the passing air. Radiators are also used to cool automatic transmission fluids, air conditioner refrigerant, intake air, and sometimes motor oil or power steering fluid. A radiator is typically mounted in a position where it receives airflow from the forward movement of the vehicle, such as behind a front grill. Where engines are mid- or rear-mounted, it is common to mount the radiator behind a front grill to achieve sufficient airflow, even though this requires long coolant pipes. Alternatively, the radiator may draw air from the flow over the top of the vehicle or from a side-mounted grill. For long vehicles, such as buses, side airflow is most common for engine and transmission cooling, and top airflow most common for air conditioner cooling. Automobile radiators are constructed of a pair of metal or plastic header tanks, linked by a core with many narrow passageways, giving a high surface area relative to volume. This core is usually made of stacked layers of metal sheet, pressed to form channels and soldered or brazed together. For many years radiators were made from brass or copper cores soldered to brass headers. Modern radiators have aluminum cores, and often save money and weight by using plastic headers with gaskets. This construction is more prone to failure and less easily repaired than traditional materials. An earlier construction method was the honeycomb radiator. Round tubes were swaged into hexagons at their ends, then stacked together and soldered. As they only touched at their ends, this formed what became in effect a solid water tank with many air tubes through it. [ 2 ] Some vintage cars use radiator cores made from coiled tube, a less efficient but simpler construction. Radiators first used downward vertical flow, driven solely by a thermosyphon effect.
Coolant is heated in the engine, becomes less dense, and so rises. As the radiator cools the fluid, the coolant becomes denser and falls. This effect is sufficient for low-power stationary engines , but inadequate for all but the earliest automobiles. All automobiles for many years have used centrifugal pumps to circulate the engine coolant because natural circulation has very low flow rates. A system of valves or baffles, or both, is usually incorporated to simultaneously operate a small radiator inside the vehicle. This small radiator, and the associated blower fan, is called the heater core , and serves to warm the cabin interior. Like the radiator, the heater core acts by removing heat from the engine. For this reason, automotive technicians often advise operators to turn on the heater and set it to high if the engine is overheating , to assist the main radiator. The engine temperature on modern cars is primarily controlled by a wax-pellet type of thermostat , a valve that opens once the engine has reached its optimum operating temperature . When the engine is cold, the thermostat is closed except for a small bypass flow so that the thermostat experiences changes to the coolant temperature as the engine warms up. Engine coolant is directed by the thermostat to the inlet of the circulating pump and is returned directly to the engine, bypassing the radiator. Directing water to circulate only through the engine allows the engine to reach optimum operating temperature as quickly as possible whilst avoiding localized "hot spots." Once the coolant reaches the thermostat's activation temperature, it opens, allowing water to flow through the radiator to prevent the temperature from rising higher. Once at optimum temperature, the thermostat controls the flow of engine coolant to the radiator so that the engine continues to operate at optimum temperature. Under peak load conditions, such as driving slowly up a steep hill whilst heavily laden on a hot day, the thermostat will be approaching fully open because the engine will be producing near maximum power while the velocity of airflow across the radiator is low. (Being a heat exchanger, the velocity of air flow across the radiator has a major effect on its ability to dissipate heat.) Conversely, when cruising fast downhill on a motorway on a cold night on a light throttle, the thermostat will be nearly closed because the engine is producing little power, and the radiator is able to dissipate much more heat than the engine is producing. Allowing too much flow of coolant to the radiator would result in the engine being over-cooled and operating at lower than optimum temperature, resulting in decreased fuel efficiency and increased exhaust emissions. Furthermore, engine durability, reliability, and longevity are sometimes compromised, if any components (such as the crankshaft bearings) are engineered to take thermal expansion into account to fit together with the correct clearances. Another side effect of over-cooling is reduced performance of the cabin heater, though in typical cases it still blows air at a considerably higher temperature than ambient. The thermostat is therefore constantly moving throughout its range, responding to changes in vehicle operating load, speed, and external temperature, to keep the engine at its optimum operating temperature. On vintage cars you may find a bellows type thermostat, which has corrugated bellows containing a volatile liquid such as alcohol or acetone. 
These types of thermostats do not work well at cooling system pressures above about 7 psi. Modern motor vehicles typically run at around 15 psi, which precludes the use of the bellows type thermostat. On direct air-cooled engines, this is not a concern for the bellows thermostat that controls a flap valve in the air passages. Other factors influence the temperature of the engine, including radiator size and the type of radiator fan. The size of the radiator (and thus its cooling capacity ) is chosen such that it can keep the engine at the design temperature under the most extreme conditions a vehicle is likely to encounter (such as climbing a mountain whilst fully loaded on a hot day). Airflow speed through a radiator is a major influence on the heat it dissipates. Vehicle speed affects this, in rough proportion to the engine effort, thus giving crude self-regulatory feedback. Where an additional cooling fan is driven by the engine, this also tracks engine speed similarly. Engine-driven fans are often regulated by a fan clutch from the drivebelt, which slips and reduces the fan speed at low temperatures. This improves fuel efficiency by not wasting power on driving the fan unnecessarily. On modern vehicles, further regulation of cooling rate is provided by either variable speed or cycling radiator fans. Electric fans are controlled by a thermostatic switch or the engine control unit . Electric fans also have the advantage of giving good airflow and cooling at low engine revs or when stationary, such as in slow-moving traffic. Before the development of viscous-drive and electric fans, engines were fitted with simple fixed fans that drew air through the radiator at all times. Vehicles whose design required the installation of a large radiator to cope with heavy work at high temperatures, such as commercial vehicles and tractors would often run cool in cold weather under light loads, even with the presence of a thermostat , as the large radiator and fixed fan caused a rapid and significant drop in coolant temperature as soon as the thermostat opened. This problem can be solved by fitting a radiator blind (or radiator shroud ) to the radiator that can be adjusted to partially or fully block the airflow through the radiator. At its simplest the blind is a roll of material such as canvas or rubber that is unfurled along the length of the radiator to cover the desired portion. Some older vehicles, like the World War I-era Royal Aircraft Factory S.E.5 and SPAD S.XIII single-engined fighters, have a series of shutters that can be adjusted from the driver's or pilot's seat to provide a degree of control. Some modern cars have a series of shutters that are automatically opened and closed by the engine control unit to provide a balance of cooling and aerodynamics as needed. [ 3 ] Because the thermal efficiency of internal combustion engines increases with internal temperature, the coolant is kept at higher-than-atmospheric pressure to increase its boiling point . A calibrated pressure-relief valve is usually incorporated in the radiator's fill cap. This pressure varies between models, but typically ranges from 4 to 30 psi (30 to 200 kPa). [ 4 ] As the coolant system pressure increases with a rise in temperature, it will reach the point where the pressure relief valve allows excess pressure to escape. This will stop when the system temperature stops rising. In the case of an over-filled radiator (or header tank) pressure is vented by allowing a little liquid to escape. 
This may simply drain onto the ground or be collected in a vented container which remains at atmospheric pressure. When the engine is switched off, the cooling system cools and the liquid level drops. In some cases where excess liquid has been collected in a bottle, this may be 'sucked' back into the main coolant circuit. In other cases, it is not. Before World War II, engine coolant was usually plain water. Antifreeze was used solely to control freezing, and this was often only done in cold weather. If plain water is left to freeze in the block of an engine, it expands as it freezes, and the expanding ice can cause severe internal engine damage. Development of high-performance aircraft engines required improved coolants with higher boiling points, leading to the adoption of glycol or water-glycol mixtures; these in turn brought glycols into use for their antifreeze properties. Since the development of aluminium alloy or mixed-metal engines, corrosion inhibition has become even more important than antifreeze, in all regions and seasons. An overflow tank that runs dry may result in the coolant vaporizing, which can cause localized or general overheating of the engine. Severe damage may result if the vehicle is allowed to run over temperature. Failures such as blown head gaskets, and warped or cracked cylinder heads or cylinder blocks, may be the result. Sometimes there will be no warning, because the temperature sensor that provides data for the temperature gauge (either mechanical or electrical) is exposed to water vapor, not the liquid coolant, providing a harmfully false reading. Opening a hot radiator drops the system pressure, which may cause the coolant to boil and eject dangerously hot liquid and steam. Therefore, radiator caps often contain a mechanism that attempts to relieve the internal pressure before the cap can be fully opened. The invention of the automobile water radiator is attributed to Karl Benz. Wilhelm Maybach designed the first honeycomb radiator for the Mercedes 35hp. [ 5 ] It is sometimes necessary for a car to be equipped with a second, or auxiliary, radiator to increase the cooling capacity, when the size of the original radiator cannot be increased. The second radiator is plumbed in series with the main radiator in the circuit. This was the case when the Audi 100 was first turbocharged, creating the 200. These are not to be confused with intercoolers. Some engines have an oil cooler, a separate small radiator to cool the engine oil. Cars with an automatic transmission often have extra connections to the radiator, allowing the transmission fluid to transfer its heat to the coolant in the radiator. These may be oil-air radiators, essentially a smaller version of the main radiator. More simply, they may be oil-water coolers, where an oil pipe is inserted inside the water radiator. Though the water is hotter than the ambient air, its higher thermal conductivity offers comparable cooling (within limits) from a less complex and thus cheaper and more reliable [ citation needed ] oil cooler. Less commonly, power steering fluid, brake fluid, and other hydraulic fluids may be cooled by an auxiliary radiator on a vehicle. Turbocharged or supercharged engines may have an intercooler, which is an air-to-air or air-to-water radiator used to cool the incoming air charge—not to cool the engine. Aircraft with liquid-cooled piston engines (usually inline engines rather than radial) also require radiators.
As airspeed is higher than for cars, these are efficiently cooled in flight, and so do not require large areas or cooling fans. Many high-performance aircraft, however, suffer extreme overheating problems when idling on the ground – a Spitfire could idle for a mere seven minutes before overheating. [ 6 ] This is similar to modern Formula 1 cars: when stopped on the grid with engines running, they require ducted air forced into their radiator pods to prevent overheating. Reducing drag is a major goal in aircraft design, including the design of cooling systems. An early technique was to take advantage of an aircraft's abundant airflow to replace the honeycomb core (many surfaces, with a high ratio of surface to volume) by a surface-mounted radiator. This uses a single surface blended into the fuselage or wing skin, with the coolant flowing through pipes at the back of this surface. Such designs were seen mostly on World War I aircraft. As they are so dependent on airspeed, surface radiators are even more prone to overheating when ground-running. Racing aircraft such as the Supermarine S.6B, a racing seaplane with radiators built into the upper surfaces of its floats, have been described as "being flown on the temperature gauge", cooling being the main limit on their performance. [ 7 ] Surface radiators have also been used by a few high-speed racing cars, such as Malcolm Campbell's Blue Bird of 1928. It is generally a limitation of most cooling systems that the cooling fluid not be allowed to boil, as the need to handle gas in the flow greatly complicates design. For a water-cooled system, this means that the maximum amount of heat transfer is limited by the specific heat capacity of water and the difference in temperature between ambient and 100 °C. This provides more effective cooling in the winter, or at higher altitudes where the temperatures are low. Another effect that is especially important in aircraft cooling is that the specific heat capacity changes and the boiling point falls with pressure, and this pressure changes more rapidly with altitude than the drop in temperature. Thus, generally, liquid cooling systems lose capacity as the aircraft climbs. This was a major limit on performance during the 1930s, when the introduction of turbosuperchargers first allowed convenient travel at altitudes above 15,000 ft, and cooling design became a major area of research. The most obvious, and common, solution to this problem was to run the entire cooling system under pressure. This maintained the specific heat capacity at a constant value, while the outside air temperature continued to drop. Such systems thus improved cooling capability as they climbed. For most uses, this solved the problem of cooling high-performance piston engines, and almost all liquid-cooled aircraft engines of the World War II period used this solution. However, pressurized systems were also more complex and far more susceptible to damage: as the cooling fluid was under pressure, even minor damage to the cooling system, like a single rifle-calibre bullet hole, would cause the liquid to spray rapidly out of the hole. Failures of the cooling systems were, by far, the leading cause of engine failures. Although it is more difficult to build an aircraft radiator that is able to handle steam, it is by no means impossible. The key requirement is to provide a system that condenses the steam back into liquid before passing it back into the pumps and completing the cooling loop.
Such a system can take advantage of the specific heat of vaporization, which in the case of water is about five times the energy needed to heat the same mass of liquid water from 0 °C to 100 °C. Additional gains may be had by allowing the steam to become superheated. Such systems, known as evaporative coolers, were the topic of considerable research in the 1930s. Consider two cooling systems that are otherwise similar, operating at an ambient air temperature of 20 °C. An all-liquid design might operate between 30 °C and 90 °C, offering 60 °C of temperature difference to carry away heat. An evaporative cooling system might operate between 80 °C and 110 °C. At first glance this appears to be much less temperature difference, but this analysis overlooks the enormous amount of heat energy soaked up during the generation of steam, equivalent to roughly 500 °C of liquid temperature rise. In effect, the evaporative version is operating between 80 °C and about 610 °C, an effective temperature difference of some 530 °C. Such a system can be effective even with much smaller amounts of water. The downside to the evaporative cooling system is the area of the condensers required to cool the steam back below the boiling point. As steam is much less dense than water, a correspondingly larger surface area is needed to provide enough airflow to cool the steam back down. The Rolls-Royce Goshawk design of 1933 used conventional radiator-like condensers, and this design proved to create serious drag. In Germany, the Günter brothers developed an alternative design combining evaporative cooling and surface radiators spread all over the aircraft wings, fuselage and even the rudder. Several aircraft were built using their design and set numerous performance records, notably the Heinkel He 119 and Heinkel He 100. However, these systems required numerous pumps to return the liquid from the spread-out radiators, proved extremely difficult to keep running properly, and were much more susceptible to battle damage. Efforts to develop this system had generally been abandoned by 1940. The need for evaporative cooling was soon to be negated by the widespread availability of ethylene glycol-based coolants, which had a lower specific heat, but a much higher boiling point than water. An aircraft radiator contained in a duct heats the air passing through, causing the air to expand and gain velocity. This is called the Meredith effect, and high-performance piston aircraft with well-designed low-drag radiators (notably the P-51 Mustang) derive thrust from it. The thrust was significant enough to offset the drag of the duct the radiator was enclosed in, and allowed the aircraft to achieve zero cooling drag. At one point, there were even plans to equip a Mustang with an afterburner, by injecting fuel into the exhaust duct after the radiator and igniting it; [ 8 ] afterburning is achieved by injecting additional fuel into the engine downstream of the main combustion cycle. Engines for stationary plant are normally cooled by radiators in the same way as automobile engines. There are some differences depending on the stationary plant; careful planning is needed to ensure proper air flow across the radiator for adequate cooling. In some cases, evaporative cooling is used via a cooling tower. [ 9 ]
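The "effective temperature difference" argument above is easiest to check per kilogram of coolant. The following is a rough sketch using standard values for water (specific heat about 4.18 kJ/(kg·K), latent heat of vaporization about 2257 kJ/kg); the operating temperatures are the ones assumed in the text.

CP_WATER = 4.18    # specific heat of liquid water, kJ/(kg*K)
L_VAPOR = 2257.0   # latent heat of vaporization of water, kJ/kg

# All-liquid system: coolant cycles between 30 degC and 90 degC.
heat_liquid = CP_WATER * (90 - 30)                  # ~251 kJ per kg

# Evaporative system: liquid heated from 80 degC to 110 degC, then boiled.
heat_evaporative = CP_WATER * (110 - 80) + L_VAPOR  # ~2382 kJ per kg

print(heat_liquid, heat_evaporative)
print(heat_evaporative / heat_liquid)  # ~9.5x more heat carried per kg
print(L_VAPOR / CP_WATER)              # latent heat equivalent: ~540 K of
                                       # liquid temperature rise

The last figure shows that the round "500 °C" equivalence used above is in fact nearer 540 °C; either way, each kilogram of boiled coolant carries roughly an order of magnitude more heat than in the all-liquid design.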
https://en.wikipedia.org/wiki/Radiator_(engine_cooling)
Radiators and convectors are heat exchangers designed to transfer thermal energy from one medium to another for the purpose of space heating. Denison Olmsted of New Haven, Connecticut, appears to have been the earliest person to use the term 'radiator' to mean a heating appliance in an 1834 patent for a stove with a heat exchanger which then radiated heat. In the patent he wrote that his invention was "a peculiar kind of apparatus, which I call a radiator". [ 1 ] The heating radiator was invented by Franz San Galli in 1855, a Kingdom of Prussia -born Russian businessman living in St. Petersburg . [ 2 ] [ 3 ] In the late 1800s, companies, such as the American Radiator Company , promoted cast iron radiators over previous fabricated steel designs in order to lower costs and expand the market. A radiator is a device that transfers heat to a medium primarily through thermal radiation . In practice, the term radiator is often applied to any number of devices in which a fluid circulates through exposed pipes (often with fins or other means of increasing surface area), notwithstanding that such devices tend to transfer heat mainly by convection and might logically be called convectors . The terms convection heater and convector refer to a class of devices in which the source of heat is not directly exposed. As domestic safety and the supply from water heaters keep temperatures relatively low, radiation is inefficient in comparison to convection. [ citation needed ] Steam has the advantage of flowing through pipes under its own pressure without the need for pumping. For this reason, it was adopted earlier, before electric motors and pumps became available. Steam is also far easier to distribute than hot water throughout large, tall buildings like skyscrapers . However, the higher temperatures at which steam systems operate make them inherently less efficient, as unwanted heat loss is inevitably greater. Steam pipes and radiators are prone to producing banging sounds called steam hammer . The bang is created when some of the steam condenses into water in a horizontal section of the steam piping. Subsequently, steam picks up the water, forms a "slug" and hurls it at high velocity into a pipe fitting, creating a loud hammering noise and greatly stressing the pipe. This condition is usually caused by a poor condensate drainage strategy and is often caused by buildings settling and the resultant pooling of condensate in pipes and radiators that no longer tilt slightly back towards the boiler . [ citation needed ] A hot-water radiator consists of a sealed hollow metal container filled with hot water from a boiler or other heating device by gravity feed, a pump, or natural convection . As it gives out heat, the hot water cools and sinks to the bottom of the radiator and is forced out of a pipe at the other end. Anti-hammer devices are often installed to prevent or minimize knocking in hot water radiator pipes. Unlike steam or hot water systems which receive heat from a boiler, electric radiators produce heat from electricity at the location of the radiator. This heat may be transferred to a fluid (such as oil) inside the radiator. The oil circulates inside the radiator by convection, which distributes the heat from the heating element to the surface of the radiator. Smaller electric radiators have the advantage of being portable, as they do not need to be connected to pipework. 
Some electric radiators can also use hot water; this is particularly common for heated towel rails, where the radiator uses hot water when the central heating system is running but switches to electricity when heating the whole building is not required. Cast iron radiators may be used with hot water or steam systems. Traditional cast iron radiators are no longer common in new construction, replaced mostly with forced hot water baseboard or panel radiators, but they remain available. Hot-water baseboard convectors (often referred to as "fin-tube radiators") consist of copper pipes which have aluminum fins attached to increase their surface area. Conduction transfers heat from the water circulated in the pipes into the metal radiators or convectors. Baseboard convectors are designed to heat the air in the room, using convection to transfer heat from the radiators to the surrounding air. [ 4 ] They do this by drawing cool air in at the bottom, warming the air as it passes over the radiator fins, and discharging the heated air at the top. This sets up convective loops of air movement within a room. If the radiator is blocked either from above or below, this air movement is prevented, and the heater will not work. Baseboard heating systems are sometimes fitted with moveable covers to allow the resident to fine-tune heating by room, much like air registers in a central air system. Panel radiators are welded from flat or corrugated steel panels, and are usually hung from the wall. They are usually used with hot water systems, but electric versions are also available. The panels often have fins attached, which increase the surface area and therefore the amount of heat that can be transferred into the air. Several panels may be stacked together to make one radiator, and the resulting radiator is referred to with a two-digit type number. The first digit is the number of panels, and the second is the number of sets of fins; for example, a type 21 radiator has two panels with one set of fins in between. Air flow around the radiator and between the panels is by convection only, and must be unrestricted if the radiator is to reach its design performance. The heat output of panel radiators is regulated by controlling the flow of hot water, with either a manual or a thermostatic valve. Radiators can also be made from aluminium, which is a very good conductor of heat with better thermal conductivity than steel. Aluminium radiators tend to have a low water content, and this, combined with the metal's excellent thermal conductivity, makes them very responsive to changes in temperature demand. [ 5 ] A fan-assisted convector contains a heat exchanger fed by hot water from the heating system. A thermostatic switch energises an electric fan which blows air over the heat exchanger to circulate warmed air in the room. Its advantages are small relative size and even distribution of heat. Disadvantages are fan noise and the need for both a source of heat and a separate electrical supply. Also known as "radiant heat", underfloor heating uses a network of pipes, tubing or heating cables, buried in or attached beneath a floor, to allow heat to rise into the room. Best results are achieved with conductive flooring materials such as tile. The large surface area of such room-sized radiators allows them to be kept just a few degrees above the desired room temperature, minimizing convection.
Underfloor heating is more expensive in new construction than less efficient systems, and it is generally difficult to retrofit into existing buildings. The Roman hypocaust employed a similar principle of operation. Skirting-board radiators are a form of heater in which radiator elements are placed inside a skirting board. Hot water is piped through the system, usually taken directly from the central heating system. [ 6 ] By warming the air, radiators can lower indoor humidity, which may contribute to dry skin, lower physical comfort, and shrinkage of wood flooring, for example. However, a humidifier can be used to increase the humidity. [ 7 ]
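The two-digit panel radiator convention described above is simple enough to capture in code. The toy helper below is illustrative only: the decoding rule comes from the text, and the function itself is not any industry API.

def decode_panel_radiator_type(type_number: str) -> str:
    # First digit: number of steel panels; second digit: number of
    # sets of convection fins, per the two-digit convention above.
    if len(type_number) != 2 or not type_number.isdigit():
        raise ValueError("expected a two-digit type such as '21'")
    panels, fin_sets = int(type_number[0]), int(type_number[1])
    return f"type {type_number}: {panels} panel(s), {fin_sets} fin set(s)"

print(decode_panel_radiator_type("21"))  # two panels, one set of fins
print(decode_panel_radiator_type("22"))  # two panels, two sets of fins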
https://en.wikipedia.org/wiki/Radiator_(heating)
Radical-nucleophilic aromatic substitution or S RN 1 in organic chemistry is a type of substitution reaction in which a certain substituent on an aromatic compound is replaced by a nucleophile through an intermediary free radical species. The substituent X is a halide, and the nucleophile can be sodium amide, an alkoxide, or a carbon nucleophile such as an enolate. [ 1 ] In contrast to regular nucleophilic aromatic substitution, deactivating groups on the arene are not required. [ 2 ] This reaction type was discovered in 1970 by Bunnett and Kim, [ 3 ] and the abbreviation S RN 1 stands for substitution radical-nucleophilic unimolecular, as it shares properties with an aliphatic S N 1 reaction. An example of this reaction type is the Sandmeyer reaction. In this radical substitution, the aryl halide 1 accepts an electron from a radical initiator, forming a radical anion 2. This intermediate collapses into an aryl radical 3 and a halide anion. The aryl radical reacts with the nucleophile 4 to give a new radical anion 5, which goes on to form the substituted product by transferring its electron to a new aryl halide molecule in the chain propagation. Alternatively, the aryl radical can abstract a hydrogen atom from a donor 7, forming the arene 8 in a chain termination reaction. The involvement of a radical intermediate in a new type of nucleophilic aromatic substitution was invoked when the product distribution was compared between a certain aromatic chloride and an aromatic iodide in reaction with potassium amide. The chloride reaction proceeds through a classical aryne intermediate: the isomers 1a and 1b form the same aryne 2, which goes on to react to give the anilines 3a and 3b in a 1 to 1.5 ratio. Clear-cut cine-substitution would give a 1:1 ratio, but additional steric and electronic factors come into play as well. Replacing chlorine by iodine in the 1,2,4-trimethylbenzene moiety drastically changes the product distribution: it now resembles ipso-substitution, with 1a preferentially forming 3a and 1b forming 3b. Radical scavengers suppress ipso-substitution in favor of cine-substitution, and the addition of potassium metal, as an electron donor and radical initiator, does exactly the opposite. [ 4 ]
https://en.wikipedia.org/wiki/Radical-nucleophilic_aromatic_substitution
In chemistry , a radical , also known as a free radical , is an atom , molecule , or ion that has at least one unpaired valence electron . [ 1 ] [ 2 ] With some exceptions, these unpaired electrons make radicals highly chemically reactive . Many radicals spontaneously dimerize . Most organic radicals have short lifetimes. A notable example of a radical is the hydroxyl radical (HO · ), a molecule that has one unpaired electron on the oxygen atom. Two other examples are triplet oxygen and triplet carbene ( ꞉ CH 2 ) which have two unpaired electrons. Radicals may be generated in a number of ways, but typical methods involve redox reactions . Ionizing radiation , heat, electrical discharges, and electrolysis are known to produce radicals. Radicals are intermediates in many chemical reactions, more so than is apparent from the balanced equations. Radicals are important in combustion , atmospheric chemistry , polymerization , plasma chemistry, biochemistry , and many other chemical processes. A majority of natural products are generated by radical-generating enzymes. In living organisms, the radicals superoxide and nitric oxide and their reaction products regulate many processes, such as control of vascular tone and thus blood pressure. They also play a key role in the intermediary metabolism of various biological compounds. Such radicals can even be messengers in a process dubbed redox signaling . A radical may be trapped within a solvent cage or be otherwise bound. Radicals are either (1) formed from spin-paired molecules or (2) from other radicals. Radicals are formed from spin-paired molecules through homolysis of weak bonds or electron transfer, also known as reduction. Radicals are formed from other radicals through substitution, addition , and elimination reactions. Homolysis makes two new radicals from a spin-paired molecule by breaking a covalent bond, leaving each of the fragments with one of the electrons in the bond. [ 3 ] The homolytic bond dissociation energies , usually abbreviated as "Δ H °" are a measure of bond strength. Splitting H 2 into 2 H • , for example, requires a Δ H ° of +435 kJ/mol , while splitting Cl 2 into two Cl • requires a Δ H ° of +243 kJ/mol. For weak bonds, homolysis can be induced thermally. Strong bonds require high energy photons or even flames to induce homolysis. [ citation needed ] Some homolysis reactions are particularly important because they serve as an initiator for other radical reactions. One such example is the homolysis of halogens, which occurs under light and serves as the driving force for radical halogenation reactions. Another notable reaction is the homolysis of dibenzoyl peroxide, which results in the formation of two benzoyloxy radicals and acts as an initiator for many radical reactions. [ 4 ] Classically, radicals form by one-electron reductions . Typically one-electron reduced organic compounds are unstable. Stability is conferred to the radical anion when the charge can be delocalized . Examples include alkali metal naphthenides , anthracenides , and ketyls . Hydrogen abstraction generates radicals. To achieve this reaction, the C-H bond of the H-atom donor must be weak, which is rarely the case in organic compounds. Allylic and especially doubly allylic C-H bonds are prone to abstraction by O 2 . This reaction is the basis of drying oils , such as linoleic acid derivatives. In free-radical additions , a radical adds to a spin-paired substrate. When applied to organic compounds, the reaction usually entails addition to an alkene. 
This addition generates a new radical, which can add to yet another alkene, etc. This behavior underpins radical polymerization, the technology that produces many plastics. [ 5 ] [ 6 ] Radical elimination can be viewed as the reverse of radical addition. In radical elimination, an unstable radical compound breaks down into a spin-paired molecule and a new radical compound. Shown below is an example of a radical elimination reaction, where a benzoyloxy radical breaks down into a phenyl radical and a carbon dioxide molecule. [ 7 ] A large variety of inorganic radicals, as well as a smaller number of organic radicals, are stable and in fact isolable. Nitric oxide (NO) is a well-known example of an isolable inorganic radical, and Fremy's salt (potassium nitrosodisulfonate, (KSO 3 ) 2 NO) is a related example. Many thiazyl radicals are known, despite limited π resonance stabilization (see below). [ 8 ] [ 9 ] The term "stable radical" bears a pernicious ambiguity. Radicals' behavior varies with distinct thermodynamic and kinetic stabilities, and no general rule connects the two. For example, resonance delocalization thermodynamically stabilizes benzyl radicals, but those radicals undergo rapid, diffusion-limited dimerization. Under normal conditions, their kinetic lifetime is measured in nanoseconds. [ 10 ] Conversely, H • is highly reactive (thermodynamically unstable), but also the most abundant chemical in the universe (kinetically stable) because it exists primarily in low-density environments. [ citation needed ] Following Griller and Ingold's extremely influential 1976 review, [ 10 ] modern chemists call a carbon-centered radical R • stabilized if the corresponding R–H bond is weaker than in an alkane; the radical is persistent if its lifetime is longer than the encounter limit. [ 11 ] Persistence is almost exclusively a steric effect. [ 10 ] However, orbitals of high angular momentum (d or f), delocalization, and the α effect can all stabilize organic radicals. The commercially available radical 2,2,6,6-tetramethylpiperidinyloxyl (TEMPO) illustrates these phenomena: the methyl substituents shield the N-hydroxypiperidinyl core radical for persistence, and the vicinal nitrogen and oxygen lone pairs weaken any bonds that might form to oxygen, keeping the radical stabilized. Consequently TEMPO behaves, aside from its paramagnetism, like a normal organic compound. [ 3 ] [ better source needed ] In molecular orbital theory, a radical electronic structure is characterized by a highest-energy filled molecular orbital that contains only an unpaired electron. That orbital is called the "singly-occupied molecular orbital" or SOMO, and is traditionally drawn filled spin-up without loss of generality. [ 3 ] : 977 Radical compounds are thermodynamically unstable because fixed nuclear positions cannot simultaneously minimize the filled spin-up orbital energies (which include the SOMO) and the filled spin-down orbital energies (which do not). Thus a SOMO whose energy depends little on nuclear position can produce a relatively stabilized radical. [ citation needed ] Two common types of such SOMOs are a d orbital, [ 12 ] which requires only Jahn-Teller distortion; [ citation needed ] and a SOMO delocalized over a large portion of the molecule or crystal, [ 13 ] : 649–650 which requires little motion at each nucleus. [ citation needed ] SOMOs can in principle be of any type, but amongst the main group atoms, almost all known stable radicals have a π-type SOMO.
[11] Consequently SOMOs delocalize like other π bonds: to nearby lone pairs on hydroxyl groups (−OH), ethers (−OR), or amines (−NH2 or −NR2); to conjugated π bonds in alkenes, carbonyls, or nitriles; or in hyperconjugation to nearby hydrogen- or fluorine-rich moieties. [14] Many of the above functional groups are electron-donating, but electron donation is not necessary to achieve SOMO delocalization, and electron withdrawal functions just as well. [3]: 978 Indeed, radicals are particularly stable if they can delocalize into both an electron-withdrawing and an electron-donating group, the "capto-dative effect". [15] In the electron-donating case, the SOMO interacts with the lower-energy lone pair to form a new, lower-energy, filled, delocalized bond orbital and a new, higher-energy antibonding SOMO (in net, a three-electron bond). Because the new bonding orbital contains more electrons than the SOMO, the resulting electronic state reduces the molecular energy. [3]: 979 In the electron-withdrawing case, the SOMO interacts with an empty σ* or π* antibonding orbital. That antibonding orbital has less energy than the isolated SOMO, as does the resulting hybrid orbital. [3]: 978 The stability of many (or most) organic radicals is not indicated by their isolability but is manifested in their ability to function as donors of H•. This property reflects a weakened bond to hydrogen, usually O−H but sometimes N−H or C−H. This behavior is important because these H• donors serve as antioxidants in biology and in commerce. Illustrative is α-tocopherol (vitamin E). The tocopherol radical itself is insufficiently stable for isolation, but the parent molecule is a highly effective hydrogen-atom donor. The C−H bond is weakened in triphenylmethyl (trityl) derivatives. [citation needed] Most main-group radicals are in notional equilibrium with closed-shell dimers. For example, nitrogen dioxide equilibrates with dinitrogen tetroxide, and tributyltin radicals equilibrate with hexabutyldistannane. Consequently radicals may be stabilized when the dimeric bond is weak. For example, compounds with a radical localized on atoms with adjacent lone pairs experience a powerful α effect upon dimerization, such that the dimer may practically never form. [16] Likewise, the quinonic loss of aromaticity in Gomberg's dimer predisposes that compound towards homolysis. In other cases, radical dimers may form a "π dimer", analogous to a donor-acceptor complex but without charge transfer. [17] Diradicals are molecules containing two radical centers. Dioxygen (O2) is an important example of a stable diradical. Singlet oxygen, the lowest-energy non-radical state of dioxygen, is less stable than the diradical due to Hund's rule of maximum multiplicity. The relative stability of the oxygen diradical is primarily due to the spin-forbidden nature of the triplet–singlet transition required for it to grab electrons, i.e., "oxidize". The diradical state of oxygen also results in its paramagnetic character, which is demonstrated by its attraction to an external magnet. [18] Diradicals can also occur in metal-oxo complexes, lending themselves to studies of spin-forbidden reactions in transition metal chemistry. [19] Carbenes in their triplet state can be viewed as diradicals centred on the same atom; while triplet carbenes are usually highly reactive, persistent carbenes are known, with N-heterocyclic carbenes being the most common example. Triplet carbenes and nitrenes are diradicals.
Their chemical properties are distinct from the properties of their singlet analogues. A familiar radical reaction is combustion. The oxygen molecule is a stable diradical, best represented by •O–O•. Because the spins of the electrons are parallel, this molecule is stable. While the ground state of oxygen is this unreactive spin-unpaired (triplet) diradical, an extremely reactive spin-paired (singlet) state is available. For combustion to occur, the energy barrier between these states must be overcome. This barrier can be overcome by heat, requiring high temperatures. The triplet–singlet transition is also "forbidden", which presents an additional barrier to the reaction. It also means molecular oxygen is relatively unreactive at room temperature except in the presence of a catalytic heavy atom such as iron or copper. Combustion consists of various radical chain reactions that the singlet radical can initiate. The flammability of a given material strongly depends on the concentration of radicals that must be obtained before initiation and propagation reactions dominate, leading to combustion of the material. Once the combustible material has been consumed, termination reactions again dominate and the flame dies out. As indicated, promotion of propagation or termination reactions alters flammability. For example, because lead itself deactivates radicals in the gasoline–air mixture, tetraethyl lead was once commonly added to gasoline. This prevented the combustion from initiating in an uncontrolled manner or in unburnt residues (engine knocking), and prevented premature ignition (preignition). When a hydrocarbon is burned, a large number of different oxygen radicals are involved. Initially, hydroperoxyl radicals (HOO•) are formed. These then react further to give organic hydroperoxides that break up into hydroxyl radicals (HO•). Many polymerization reactions are initiated by radicals. Polymerization involves an initial radical adding to a non-radical (usually an alkene) to give new radicals. This process is the basis of the radical chain reaction. The art of polymerization entails the method by which the initiating radical is introduced. For example, methyl methacrylate (MMA) can be polymerized to produce poly(methyl methacrylate) (PMMA – Plexiglas or Perspex) via a repeating series of radical addition steps. Newer radical polymerization methods are known as living radical polymerization. Variants include reversible addition-fragmentation chain transfer (RAFT) and atom transfer radical polymerization (ATRP). Being a prevalent radical, O2 reacts with many organic compounds to generate radicals together with the hydroperoxyl radical. Drying oils and alkyd paints harden due to radical crosslinking initiated by oxygen from the atmosphere. The most common radical in the lower atmosphere is molecular dioxygen. Photodissociation of source molecules produces other radicals. In the lower atmosphere, important radicals are produced by the photodissociation of nitrogen dioxide to an oxygen atom and nitric oxide (see eq. 1.1 below), which plays a key role in smog formation, and by the photodissociation of ozone to give the excited oxygen atom O(1D) (see eq. 1.2 below). The net and return reactions are also shown (eq. 1.3 and eq. 1.4, respectively). In the upper atmosphere, the photodissociation of normally unreactive chlorofluorocarbons (CFCs) by solar ultraviolet radiation is an important source of radicals (see eq. 1 below).
These reactions give the chlorine radical, Cl•, which catalyzes the conversion of ozone to O2, thus facilitating ozone depletion (eq. 2.2 – eq. 2.4 below). Such reactions cause the depletion of the ozone layer, especially since the chlorine radical is free to engage in another reaction chain; consequently, the use of chlorofluorocarbons as refrigerants has been restricted. Radicals play important roles in biology. Many of these are necessary for life, such as the intracellular killing of bacteria by phagocytic cells such as granulocytes and macrophages. Radicals are involved in cell signalling processes, [21] known as redox signaling. For example, radical attack of linoleic acid produces a series of 13-hydroxyoctadecadienoic acids and 9-hydroxyoctadecadienoic acids, which may act to regulate localized tissue inflammatory and/or healing responses, pain perception, and the proliferation of malignant cells. Radical attacks on arachidonic acid and docosahexaenoic acid produce a similar but broader array of signaling products. [22] Radicals may also be involved in Parkinson's disease, senile and drug-induced deafness, schizophrenia, and Alzheimer's. [23] The classic free-radical syndrome, the iron-storage disease hemochromatosis, is typically associated with a constellation of free-radical-related symptoms including movement disorder, psychosis, skin pigmentary melanin abnormalities, deafness, arthritis, and diabetes mellitus. The free-radical theory of aging proposes that radicals underlie the aging process itself. Similarly, the process of mitohormesis suggests that repeated exposure to radicals may extend life span. Because radicals are necessary for life, the body has a number of mechanisms to minimize radical-induced damage and to repair damage that occurs, such as the enzymes superoxide dismutase, catalase, glutathione peroxidase and glutathione reductase. In addition, antioxidants play a key role in these defense mechanisms. These often include three vitamins (vitamin A, vitamin C and vitamin E) and polyphenol antioxidants. Furthermore, there is good evidence indicating that bilirubin and uric acid can act as antioxidants to help neutralize certain radicals. Bilirubin comes from the breakdown of red blood cells' contents, while uric acid is a breakdown product of purines. Too much bilirubin, though, can lead to jaundice, which could eventually damage the central nervous system, while too much uric acid causes gout. [24] Reactive oxygen species or ROS are species such as superoxide, hydrogen peroxide, and the hydroxyl radical, commonly associated with cell damage. ROS form as a natural by-product of the normal metabolism of oxygen and have important roles in cell signaling. Two important oxygen-centered radicals are superoxide and the hydroxyl radical. They derive from molecular oxygen under reducing conditions. However, because of their reactivity, these same radicals can participate in unwanted side reactions resulting in cell damage. Excessive amounts of these radicals can lead to cell injury and death, which may contribute to many diseases such as cancer, stroke, myocardial infarction, diabetes and other major disorders. [25] Many forms of cancer are thought to be the result of reactions between radicals and DNA, potentially resulting in mutations that can adversely affect the cell cycle and potentially lead to malignancy.
[26] Some of the symptoms of aging such as atherosclerosis are also attributed to radical-induced oxidation of cholesterol to 7-ketocholesterol. [27] In addition, radicals contribute to alcohol-induced liver damage, perhaps more than alcohol itself. Radicals produced by cigarette smoke are implicated in inactivation of alpha 1-antitrypsin in the lung. This process promotes the development of emphysema. Oxybenzone has been found to form radicals in sunlight, and therefore may be associated with cell damage as well. This occurred only when it was combined with other ingredients commonly found in sunscreens, like titanium oxide and octyl methoxycinnamate. [28] ROS attack the polyunsaturated fatty acid linoleic acid to form a series of 13-hydroxyoctadecadienoic acid and 9-hydroxyoctadecadienoic acid products that serve as signaling molecules that may trigger responses that counter the tissue injury which caused their formation. ROS attack other polyunsaturated fatty acids, e.g. arachidonic acid and docosahexaenoic acid, to produce a similar series of signaling products. [29] Reactive oxygen species are also used in controlled reactions involving singlet dioxygen (1O2), known as type II photooxygenation reactions, after Dexter energy transfer (triplet–triplet annihilation) from natural triplet dioxygen (3O2) and the triplet excited state of a photosensitizer. Typical chemical transformations with this singlet dioxygen species involve, among others, conversion of cellulosic biowaste into new polymethine dyes. [30] In chemical equations, radicals are frequently denoted by a dot placed immediately to the right of the atomic symbol or molecular formula, as in Cl• for the chlorine radical or HO• for the hydroxyl radical. Radical reaction mechanisms use single-headed arrows to depict the movement of single electrons: the homolytic cleavage of the breaking bond is drawn with a "fish-hook" arrow to distinguish it from the usual movement of two electrons depicted by a standard curly arrow. The second electron of the breaking bond also moves to pair up with the attacking radical electron. Radicals also take part in radical addition and radical substitution as reactive intermediates. Chain reactions involving radicals can usually be divided into three distinct processes: initiation, propagation, and termination. Until late in the 20th century the word "radical" was used in chemistry to indicate any connected group of atoms, such as a methyl group or a carboxyl, whether it was part of a larger molecule or a molecule on its own. A radical is often known as an R group. The qualifier "free" was then needed to specify the unbound case. Following recent nomenclature revisions, a part of a larger molecule is now called a functional group or substituent, and "radical" now implies "free". However, the old nomenclature may still appear in some books. The term radical was already in use when the now obsolete radical theory was developed. Louis-Bernard Guyton de Morveau introduced the phrase "radical" in 1785 and the phrase was employed by Antoine Lavoisier in 1789 in his Traité Élémentaire de Chimie. A radical was then identified as the root base of certain acids (the Latin word "radix" meaning "root"). Historically, the term radical in radical theory was also used for bound parts of the molecule, especially when they remain unchanged in reactions. These are now called functional groups.
For example, methyl alcohol was described as consisting of a methyl "radical" and a hydroxyl "radical". Neither is a radical in the modern chemical sense, as they are permanently bound to each other and have no unpaired, reactive electrons; however, they can be observed as radicals in mass spectrometry when broken apart by irradiation with energetic electrons. In a modern context, the first organic (carbon-containing) radical identified was the triphenylmethyl radical, (C6H5)3C•. This species was discovered by Moses Gomberg in 1900. In 1933 Morris S. Kharasch and Frank Mayo proposed that free radicals were responsible for anti-Markovnikov addition of hydrogen bromide to allyl bromide. [31] [32] In most fields of chemistry, the historical definition of radicals contends that the molecules have nonzero electron spin. However, in fields including spectroscopy and astrochemistry, the definition is slightly different. Gerhard Herzberg, who won the Nobel prize for his research into the electron structure and geometry of radicals, suggested a looser definition of free radicals: "any transient (chemically unstable) species (atom, molecule, or ion)". [33] The main point of his suggestion is that there are many chemically unstable molecules that have zero spin, such as C2, C3, CH2 and so on. This definition is more convenient for discussions of transient chemical processes and astrochemistry; therefore researchers in these fields prefer to use this loose definition. [34]
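The homolysis energetics quoted above lend themselves to a quick calculation. As a rough illustration (a sketch that uses the bond dissociation energies given earlier and standard physical constants; the function name is ours), the longest photon wavelength that can supply a bond's dissociation energy can be estimated as follows:

H = 6.626e-34      # Planck constant, J*s
C = 2.998e8        # speed of light, m/s
N_A = 6.022e23     # Avogadro constant, 1/mol

def max_wavelength_nm(bde_kj_per_mol):
    """Longest wavelength (nm) whose photons match the given bond energy."""
    e_per_bond = bde_kj_per_mol * 1e3 / N_A   # J per single bond
    return H * C / e_per_bond * 1e9           # metres -> nanometres

print(max_wavelength_nm(243))  # Cl-Cl: ~492 nm (visible light)
print(max_wavelength_nm(435))  # H-H:   ~275 nm (ultraviolet)

The results (roughly 492 nm for Cl–Cl versus 275 nm for H–H) are consistent with the statement that halogen homolysis occurs under ordinary light while stronger bonds require high-energy photons.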
https://en.wikipedia.org/wiki/Radical_(chemistry)
In chemistry, a radical clock is a chemical compound that assists in the indirect methodology to determine the kinetics of a free-radical reaction. The radical-clock compound itself reacts at a known rate, which provides a calibration for determining the rate of another reaction. Many organic mechanisms involve intermediates that cannot be identified directly but which are inferred from trapping reactions. [1] When such intermediates are radicals, their lifetimes can be deduced from radical clocks. [2] [3] An alternative, perhaps more direct approach involves generation and isolation of the intermediates by flash photolysis and pulse radiolysis, but such methods are time-consuming and require expensive equipment. With the indirect approach of radical clocks, one can still obtain relative or absolute rate constants without the need for instruments or equipment beyond those normally needed for the reaction being studied. [4] Radical clock reactions involve a competition between a unimolecular radical reaction with a known rate constant and a bimolecular radical reaction with an unknown rate constant to produce unrearranged and rearranged products. The rearrangement of an unrearranged radical, U•, proceeds to form R• (the clock reaction) with a known rate constant (k r). These radicals react with a trapping agent, AB, to form the unrearranged and rearranged products UA and RA, respectively. [5] The yield of the two products can be determined by gas chromatography (GC) or nuclear magnetic resonance (NMR). From the concentration of the trapping agent, the known rate constant of the radical clock, and the ratio of the products, the unknown rate constant can be indirectly established. If a chemical equilibrium exists between U• and R•, the rearranged products are dominant. [3] Because the unimolecular rearrangement reaction is first order and the bimolecular trapping reaction is second order (both irreversible), the unknown rate constant (k R) can be determined from the product ratio: k R = k r · [UA] / ([RA] · [AB]). [6] The driving force behind radical clock reactions is their ability to rearrange. [1] Some common radical clocks are radical cyclizations, ring openings, and 1,2-migrations. [3] Two popular rearrangements are the cyclization of 5-hexenyl and the ring opening of cyclopropylmethyl. [1] The 5-hexenyl radical undergoes cyclization to produce a five-membered ring, because this is entropically and enthalpically more favored than the six-membered-ring possibility. [1] [3] The rate constant for this reaction is 2.3 × 10^5 s^−1 at 298 K. [5] The cyclopropylmethyl radical undergoes a very rapid ring-opening rearrangement that relieves the ring strain and is enthalpically favorable. [1] [3] The rate constant for this reaction is 8.6 × 10^7 s^−1 at 298 K. [7] In order to determine absolute rate constants for radical reactions, unimolecular clock reactions need to be calibrated for each group of radicals, such as primary alkyls. Through the use of EPR spectroscopy, the absolute rate constants for unimolecular reactions can be measured over a range of temperatures. [3] [4] The Arrhenius equation can then be applied to calculate the rate constant for the specific temperature at which the radical clock reactions are conducted. When using a radical clock to study a reaction, there is an implicit assumption that the rearrangement rate of the radical clock is the same as it was under the conditions where that rearrangement rate was originally determined.
A theoretical study of the rearrangement reactions of cyclobutylmethyl and of 5-hexenyl in a variety of solvents found that their reaction rates were only very slightly affected by the nature of the solvent. [5] The rate of a radical clock can be increased or decreased by changing the substituents attached to it. In the figure below, the rates of the radical clocks are shown with a variety of substituents attached to the clock. [1] [failed verification] By selecting among the general classes of radical clocks and the specific substituents on them, a clock can be chosen with a rate constant suitable for studying reactions having a wide range of rates. Reactions having rates ranging from 10^−1 to 10^12 M^−1 s^−1 have been studied using radical clocks. [2] Radical clocks are used in the reduction of alkyl halides with sodium naphthalenide, reactions [clarification needed] of enones, the Wittig rearrangement, [8] reductive elimination reactions of dialkylmercury compounds, dioxirane dihydroxylations, and electrophilic fluorinations. [3]
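As a worked illustration of the indirect rate determination described above (a minimal sketch: the working form [UA]/[RA] = k_R·[AB]/k_r assumes the trapping agent is in excess and both reactions are irreversible, and the numbers are invented for illustration, not measured data):

def unknown_rate_constant(k_r, ratio_ua_ra, conc_ab):
    """k_R (M^-1 s^-1) from clock rate k_r (s^-1), ratio [UA]/[RA], and [AB] (M)."""
    return k_r * ratio_ua_ra / conc_ab

# 5-hexenyl clock (k_r = 2.3e5 s^-1 at 298 K, quoted above); suppose GC gives
# a product ratio [UA]/[RA] = 4.0 with 0.05 M trapping agent AB:
print(unknown_rate_constant(2.3e5, 4.0, 0.05))  # ~1.8e7 M^-1 s^-1

The single measured product ratio, combined with the calibrated clock rate, is all that is needed; no fast spectroscopic equipment is involved.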
https://en.wikipedia.org/wiki/Radical_clock
Radical cyclization reactions are organic chemical transformations that yield cyclic products through radical intermediates. They usually proceed in three basic steps: selective radical generation, radical cyclization, and conversion of the cyclized radical to product. [1] Radical cyclization reactions produce mono- or polycyclic products through the action of radical intermediates. Because they are intramolecular transformations, they are often very rapid and selective. Selective radical generation can be achieved at carbons bound to a variety of functional groups, and reagents used to effect radical generation are numerous. The radical cyclization step usually involves the attack of a radical on a multiple bond. After this step occurs, the resulting cyclized radicals are quenched through the action of a radical scavenger, a fragmentation process, or an electron-transfer reaction. Five- and six-membered rings are the most common products; formation of smaller and larger rings is rarely observed. Three conditions must be met for an efficient radical cyclization to take place: the radical must be generated selectively at the desired site, cyclization must be faster than competing reactions of the uncyclized radical, and the cyclized radical must be converted to product faster than it undergoes side reactions. Advantages: because radical intermediates are not charged species, reaction conditions are often mild, and functional group tolerance is high and orthogonal to that of many polar processes. Reactions can be carried out in a variety of solvents (including arenes, alcohols, and water), as long as the solvent does not have a weak bond that can undergo abstraction, and products are often synthetically useful compounds that can be carried on using existing functionality or groups introduced during radical trapping. Disadvantages: the relative rates of the various stages of radical cyclization reactions (and any side reactions) must be carefully controlled so that cyclization and trapping of the cyclized radical are favored. Side reactions are sometimes a problem, and cyclization is especially slow for small and large rings (although macrocyclizations, which resemble intermolecular radical reactions, are often high-yielding). Because many reagents exist for radical generation and trapping, establishing a single prevailing mechanism is not possible. However, once a radical is generated, it can react with multiple bonds in an intramolecular fashion to yield cyclized radical intermediates. The two ends of the multiple bond constitute two possible sites of reaction. If the radical in the resulting intermediate ends up outside of the ring, the attack is termed "exo"; if it ends up inside the newly formed ring, the attack is called "endo". In many cases, exo cyclization is favored over endo cyclization (macrocyclizations constitute the major exception to this rule). 5-Hexenyl radicals are the most synthetically useful intermediates for radical cyclizations, because cyclization is extremely rapid and exo-selective. [3] Although the exo radical is less thermodynamically stable than the endo radical, the more rapid exo cyclization is rationalized by better orbital overlap in the chair-like exo transition state (see below). (1) Substituents that affect the stability of these transition states can have a profound effect on the site selectivity of the reaction. Carbonyl substituents at the 2-position, for instance, encourage 6-endo ring closure. Alkyl substituents at positions 2, 3, 4, or 6 enhance selectivity for 5-exo closure. Cyclization of the homologous 6-heptenyl radical is still selective, but is much slower; as a result, competitive side reactions are an important problem when these intermediates are involved.
Additionally, 1,5-shifts can yield stabilized allylic radicals at comparable rates in these systems. In 6-hexenyl radical substrates, polarization of the reactive double bond with electron-withdrawing functional groups is often necessary to achieve high yields. [4] Stabilizing the initially formed radical with electron-withdrawing groups provides preferential access to the more stable 6-endo cyclization products. (2) Cyclization reactions of vinyl, aryl, and acyl radicals are also known. Under conditions of kinetic control, 5-exo cyclization takes place preferentially. However, low concentrations of a radical scavenger establish thermodynamic control and provide access to 6-endo products, not via 6-endo cyclization but by 5-exo cyclization followed by 3-exo closure and subsequent fragmentation (the Dowd–Beckwith rearrangement). At high concentrations of the scavenger, by contrast, the exo product is rapidly trapped, preventing subsequent rearrangement to the endo product. [5] Aryl radicals exhibit similar reactivity. (3) Cyclization can involve heteroatom-containing multiple bonds such as nitriles, oximes, and carbonyls. Attack at the carbon atom of the multiple bond is almost always observed. [6] [7] [8] In the latter case attack is reversible; however, alkoxy radicals can be trapped using a stannane trapping agent. The diastereoselectivity of radical cyclizations is often high. In most all-carbon cases, selectivity can be rationalized according to Beckwith's guidelines, which invoke the reactant-like, exo transition state shown above. [9] Placing substituents in pseudoequatorial positions in the transition state leads to cis products from simple secondary radicals. Introducing polar substituents can favor trans products due to steric or electronic repulsion between the polar groups. In more complex systems, the development of transition-state models requires consideration of factors such as allylic strain and boat-like transition states. [10] (4) Chiral auxiliaries have been used in enantioselective radical cyclizations with limited success. [11] Small energy differences between early transition states constitute a profound barrier to success in this arena. In the example shown, diastereoselectivity (for both configurations of the left-hand stereocenter) is low and enantioselectivity is only moderate. (5) Substrates with stereocenters between the radical and the multiple bond are often highly stereoselective. Radical cyclizations to form polycyclic products often take advantage of this property. [12] The use of metal hydrides (tin, silicon and mercury hydrides) is common in radical cyclization reactions; the primary limitation of this method is the possibility of reduction of the initially formed radical by H–M. Fragmentation methods avoid this problem by incorporating the chain-transfer reagent into the substrate itself; the active chain-carrying radical is not released until after cyclization has taken place. The products of fragmentation methods retain a double bond as a result, and extra synthetic steps are usually required to incorporate the chain-carrying group. Atom-transfer methods rely on the movement of an atom from the acyclic starting material to the cyclic radical to generate the product. [13] [14] These methods use catalytic amounts of weak reagents, preventing problems associated with the presence of strong reducing agents (such as tin hydride). Hydrogen- and halogen-transfer processes are known; the latter tend to be more synthetically useful.
(6) Oxidative [15] and reductive [16] cyclization methods also exist. These procedures require fairly electrophilic and nucleophilic radicals, respectively, to proceed effectively. Cyclic radicals are either oxidized or reduced and quenched with external or internal nucleophiles or electrophiles, respectively. In general, radical cyclization to produce small rings is difficult. However, it is possible to trap the cyclized radical before re-opening. This process can be facilitated by fragmentation (see the three-membered case below) or by stabilization of the cyclized radical (see the four-membered case). Five- and six-membered rings are the most common sizes produced by radical cyclization. (7) Polycycles and macrocycles can also be formed using radical cyclization reactions. In the former case, rings can be pre-formed and a single ring closed with radical cyclization, or multiple rings can be formed in a tandem process (as below). [17] Macrocyclizations, which lack the FMO requirement of cyclizations of smaller substrates, have the unique property of exhibiting endo selectivity. (8) In comparison to cationic cyclizations, radical cyclizations avoid issues associated with Wagner–Meerwein rearrangements, do not require strongly acidic conditions, and can be kinetically controlled. Cationic cyclizations are usually thermodynamically controlled. Radical cyclizations are much faster than analogous anionic cyclizations, and avoid β-elimination side reactions. Anionic Michael-type cyclization is an alternative to radical cyclization of activated olefins. Metal-catalyzed cyclization reactions usually require mildly basic conditions, and substrates must be chosen to avoid β-hydride elimination. The primary limitation of radical cyclizations with respect to these other methods is the potential for radical side reactions. Radical reactions must be carried out under an inert atmosphere, as dioxygen is a triplet diradical that will intercept radical intermediates. Because the relative rates of a number of processes are important to the reaction, concentrations must be carefully adjusted to optimize reaction conditions. Reactions are generally carried out in solvents whose bonds have high bond dissociation energies (BDEs), including benzene, methanol or benzotrifluoride. Even aqueous conditions are tolerated, [18] since water has a strong O–H bond with a BDE of 494 kJ/mol. This is in contrast to many polar processes, where hydroxylic solvents (or polar X–H bonds in the substrate itself) may not be tolerated due to the nucleophilicity or acidity of the functional group. (9) A mixture of bromo acetal 1 (549 mg, 1.78 mmol), AIBN (30.3 mg, 0.185 mmol), and Bu3SnH (0.65 mL, 2.42 mmol) in dry benzene (12 mL) was heated under reflux for 1 hour and then evaporated under reduced pressure. Silica gel column chromatography of the crude product with hexane–EtOAc (92:8) as eluant gave tetrahydropyran 2 (395 mg, 97%) as an oily mixture of two diastereomers. (c 0.43, CHCl3); IR (CHCl3): 1732 cm^−1; 1H NMR (CDCl3) δ 4.77–4.89 (m, 0.6H), 4.66–4.69 (m, 0.4H), 3.40–4.44 (m, 4H), 3.68 (s, 3H), 2.61 (dd, J = 15.2, 4.2 Hz, 1H), 2.51 (dd, J = 15.2, 3.8 Hz, 1H), 0.73–1.06 (m, 3H); mass spectrum: m/z 215 (M+ – Me); Anal. Calcd for C12H22O4: C, 62.6; H, 9.65. Found: C, 62.6; H, 9.7. [19]
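To make the concentration control discussed above concrete, here is a minimal sketch of the competition between unimolecular cyclization and bimolecular reduction by a metal hydride. The rate constants are illustrative assumptions (the 5-exo closure value is the commonly quoted 5-hexenyl figure; the stannane H-transfer value is an assumed order-of-magnitude number), not data from this article:

def fraction_cyclized(k_c, k_h, stannane_conc):
    """Fraction of open-chain radicals that cyclize before H-atom transfer."""
    return k_c / (k_c + k_h * stannane_conc)

K_C = 2.3e5   # s^-1, 5-exo closure of the 5-hexenyl radical (298 K, commonly quoted)
K_H = 2.4e6   # M^-1 s^-1, assumed illustrative rate for H-transfer from Bu3SnH

for conc in (0.5, 0.05, 0.005):   # stannane concentration, M
    print(conc, round(fraction_cyclized(K_C, K_H, conc), 2))
# 0.5 M -> ~0.16, 0.05 M -> ~0.66, 0.005 M -> ~0.95

This is why keeping the stannane dilute, for example by slow syringe-pump addition, favors the cyclized product over direct reduction of the initially formed radical.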
https://en.wikipedia.org/wiki/Radical_cyclization
Radical disproportionation encompasses a group of reactions in organic chemistry in which two radicals react to form two different non-radical products. Radicals in chemistry are defined as reactive atoms or molecules that contain an unpaired electron or electrons in an open shell. The unpaired electrons can cause radicals to be unstable and reactive. Reactions in radical chemistry can generate both radical and non-radical products. Radical disproportionation reactions can occur with many radicals in solution and in the gas phase. Due to the reactive nature of radical molecules, disproportionation proceeds rapidly and requires little to no activation energy. [1] The most thoroughly studied radical disproportionation reactions have been conducted with alkyl radicals, but there are many organic molecules that can exhibit more complex, multi-step disproportionation reactions. In radical disproportionation reactions one molecule acts as an acceptor while the other molecule acts as a donor. [2] In the most common disproportionation reactions, a hydrogen atom is taken, or abstracted, by the acceptor as the donor molecule undergoes an elimination reaction to form a double bond. [3] Other atoms such as halogens may also be abstracted during a disproportionation reaction. [4] Abstraction occurs as a head-to-tail reaction, with the atom being abstracted facing the radical atom on the other molecule. Radical disproportionation is often thought of as occurring in a linear fashion, with the donor radical, the acceptor radical, and the atom being transferred all along the same axis. In fact, most disproportionation reactions do not require linear orientations in space. [2] Molecules that are more sterically hindered require arrangements that are more linear, and thus react more slowly. Steric effects play a significant role in disproportionation, with ethyl radicals acting as more effective acceptors than tert-butyl radicals. [5] Tert-butyl radicals have many hydrogens on adjacent carbons available for donation, but steric effects often prevent the close approach needed for hydrogen abstraction. [6] Alkyl radical disproportionation has been studied extensively in the scientific literature. [6] During alkyl radical disproportionation, an alkane and an alkene are the end products, and the total bond order of the products increases by one over the reactants. [1] Thus the reaction is exothermic (ΔH ≈ −50 to −95 kcal/mol (−210 to −400 kJ/mol)) and proceeds rapidly. [6] Cross-disproportionation occurs when two different alkyl radicals disproportionate to form two new products. Different products can be formed depending on which alkyl radical acts as a donor and which acts as an acceptor. The efficiency of primary and secondary alkyl radicals as donors depends on the steric effects and configuration of the radical acceptors. [3] Another reaction that can sometimes occur instead of disproportionation is recombination. [6] During recombination, two radicals form one new non-radical product and one new bond. Similar to disproportionation, the recombination reaction is exothermic and requires little to no activation energy. The ratio of the rates of disproportionation to recombination is referred to as kD/kC and, for alkyl radicals, often favors recombination over disproportionation. As the number of transferable hydrogens increases, the rate constant for disproportionation increases relative to the rate constant for recombination.
[3] When the hydrogen atoms in an alkyl radical are replaced with deuterium, disproportionation proceeds at a slightly slower rate, whereas the rate of recombination remains the same. Thus disproportionation is weakly affected by the kinetic isotope effect, with kH/kD = 1.20 ± 0.15 for ethylene. [7] Hydrogens and deuterons are not involved in recombination reactions. However, deuteron abstraction during disproportionation occurs more slowly than hydrogen abstraction, due to the increased mass and reduced vibrational energy of deuterium, although the experimentally observed kH/kD is close to one. Alkoxy radicals, which contain the unpaired electron on an oxygen atom, display a higher kD/kC than alkyl radicals. The oxygen has a partial negative charge, which removes electron density from the donor carbon atom, thereby facilitating hydrogen abstraction. The rate of disproportionation is also aided by the more electronegative oxygen on the acceptor molecule. [6] Many radical processes involve chain reactions or chain propagation, with disproportionation and recombination occurring in the terminal step of the reaction. [8] Terminating chain propagation is often most significant during polymerization, as the desired chain propagation cannot take place if disproportionation and recombination reactions readily occur. [8] Controlling termination products and regulating disproportionation and recombination reactions in the terminal step are important considerations in radical chemistry and polymerization. In some reactions (such as the one shown below) one or both of the termination pathways can be hindered by steric or solvent effects. [9] Many polymer chemists are concerned with limiting the rate of disproportionation during polymerization. Although disproportionation results in formation of one new double bond, which may react with the polymer chain, a saturated hydrocarbon is also formed, and thus the chain reaction does not readily proceed. [10] During living free-radical polymerization, termination pathways for a growing polymer chain are removed. This can be achieved through several methods, one of which is reversible termination with stable radicals. Nitroxide radicals and other stable radicals reduce recombination and disproportionation rates and control the concentration of polymeric radicals. [11]
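As a small numerical sketch of the kD/kC ratio discussed above (the ratio values below are illustrative assumptions, not measured figures from this article), the fraction of termination events that proceed by disproportionation follows directly from the ratio:

def disproportionation_fraction(kd_over_kc):
    """delta = k_D / (k_D + k_C), computed from the ratio k_D/k_C."""
    return kd_over_kc / (1.0 + kd_over_kc)

print(disproportionation_fraction(0.1))  # ~0.09: recombination dominates
print(disproportionation_fraction(3.0))  # 0.75: disproportionation dominates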
https://en.wikipedia.org/wiki/Radical_disproportionation
In mathematics, and more specifically in field theory, a radical extension of a field K is a field extension obtained by a tower of field extensions, each generated by adjoining an nth root of an element from the previous field. A simple radical extension is a simple extension F/K generated by a single element α satisfying α^n = b for an element b of K. In characteristic p, we also take an extension by a root of an Artin–Schreier polynomial to be a simple radical extension. A radical series is a tower K = F_0 < F_1 < ⋯ < F_k where each extension F_i / F_{i−1} is a simple radical extension. In this case, the field extension F_k / K is called a radical extension. Radical extensions occur naturally when solving polynomial equations in radicals. In fact a solution in radicals is the expression of the solution as an element of a radical series: a polynomial f over a field K is said to be solvable by radicals if there is a splitting field of f over K contained in a radical extension of K. The Abel–Ruffini theorem states that such a solution by radicals does not exist, in general, for equations of degree at least five. Évariste Galois showed that an equation is solvable in radicals if and only if its Galois group is solvable. The proof is based on the fundamental theorem of Galois theory and the following theorem. Let K be a field containing n distinct nth roots of unity. An extension of K of degree n is a radical extension generated by an nth root of an element of K if and only if it is a Galois extension whose Galois group is a cyclic group of order n. The proof is related to Lagrange resolvents. Let ω be a primitive nth root of unity (belonging to K). If the extension is generated by α with x^n − a as a minimal polynomial, the mapping α ↦ ωα induces a K-automorphism of the extension that generates the Galois group, showing the "only if" implication. Conversely, if φ is a K-automorphism generating the Galois group, and β is a generator of the extension, let α = Σ_{i=0}^{n−1} ω^{−i} φ^i(β) (a Lagrange resolvent). The relation φ(α) = ωα implies that the product of the conjugates of α (that is, the images of α by the K-automorphisms) belongs to K, and is equal to the product of α^n by the product of the nth roots of unity. As the product of the nth roots of unity is ±1, this implies that α^n ∈ K, and thus that the extension is a radical extension. It follows from this theorem that a Galois extension may be extended to a radical extension if and only if its Galois group is solvable (but there are non-radical Galois extensions whose Galois group is solvable, for example Q(cos(2π/7))/Q). This is, in modern terminology, the criterion of solvability by radicals that was provided by Galois. The proof uses the fact that the Galois closure of a simple radical extension of degree n is the extension of it by a primitive nth root of unity, and that the Galois group of the nth roots of unity is cyclic.
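As a standard worked example of these definitions (textbook material, not specific to this article), the splitting field of x^3 − 2 over Q lies in a radical extension, witnessing that the polynomial is solvable by radicals:

\mathbb{Q} = F_0 \;<\; F_1 = \mathbb{Q}(\omega) \;<\; F_2 = \mathbb{Q}\bigl(\omega, \sqrt[3]{2}\bigr),
\qquad \omega^{3} = 1 \in F_0, \quad \bigl(\sqrt[3]{2}\bigr)^{3} = 2 \in F_1 .

Each step adjoins an nth root of an element of the previous field, so this tower is a radical series; the splitting field of x^3 − 2 equals F_2, consistent with the solvability of its Galois group S_3.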
https://en.wikipedia.org/wiki/Radical_extension
In polymer chemistry, radical polymerization (RP) is a method of polymerization by which a polymer forms by the successive addition of radical building blocks (repeat units). Radicals can be formed by a number of different mechanisms, usually involving separate initiator molecules. Following its generation, the initiating radical adds (nonradical) monomer units, thereby growing the polymer chain. Radical polymerization is a key synthesis route for obtaining a wide variety of different polymers and material composites. The relatively non-specific nature of radical chemical interactions makes this one of the most versatile forms of polymerization available and allows facile reactions between polymeric radical chain ends and other chemicals or substrates. In 2001, 40 billion of the 110 billion pounds of polymers produced in the United States were produced by radical polymerization. [1] Radical polymerization is a type of chain polymerization, along with anionic, cationic and coordination polymerization. Initiation is the first step of the polymerization process. During initiation, an active center is created from which a polymer chain is generated. Not all monomers are susceptible to all types of initiators. Radical initiation works best on the carbon–carbon double bond of vinyl monomers and the carbon–oxygen double bond in aldehydes and ketones. [1] Initiation has two steps. In the first step, one or two radicals are created from the initiating molecules. In the second step, radicals are transferred from the initiator molecules to the monomer units present. Several choices are available for these initiators. Due to side reactions, not all radicals formed by the dissociation of initiator molecules actually add monomers to form polymer chains. The efficiency factor f is defined as the fraction of the original initiator that contributes to the polymerization reaction. The maximal value of f is 1, but typical values range from 0.3 to 0.8. [7] Several types of side reactions can decrease the efficiency of the initiator. During polymerization, a polymer spends most of its time increasing its chain length, or propagating. After the radical initiator is formed, it attacks a monomer (Figure 11). [8] In an ethene monomer, one electron pair is held securely between the two carbons in a sigma bond. The other is more loosely held in a pi bond. The free radical uses one electron from the pi bond to form a more stable bond with the carbon atom. The other electron returns to the second carbon atom, turning the whole molecule into another radical. This begins the polymer chain. Figure 12 shows how the orbitals of an ethylene monomer interact with a radical initiator. [9] Once a chain has been initiated, the chain propagates (Figure 13) until there are no more monomers (living polymerization) or until termination occurs. There may be anywhere from a few to thousands of propagation steps, depending on several factors such as radical and chain reactivity, the solvent, and temperature. [10] [11] The mechanism of chain propagation is the repeated addition of the radical chain end to monomer. Chain termination is inevitable in radical polymerization due to the high reactivity of radicals. Termination can occur by several different mechanisms. If longer chains are desired, the initiator concentration should be kept low; otherwise, many shorter chains will result. [2] In contrast to the other modes of termination, chain transfer destroys only one radical but also creates another radical.
Often, however, this newly created radical is not capable of further propagation. Similar to disproportionation, all chain-transfer mechanisms also involve the abstraction of a hydrogen or other atom. There are several types of chain-transfer mechanisms. [2] Effects of chain transfer: the most obvious effect of chain transfer is a decrease in the polymer chain length. If the rate of transfer is much larger than the rate of propagation, then very small polymers are formed, with chain lengths of 2–5 repeating units (telomerization). [13] The Mayo equation estimates the influence of chain transfer on the chain length (x_n): 1/x_n = (1/x_n)_0 + k_tr[solvent]/(k_p[monomer]), where k_tr is the rate constant for chain transfer and k_p is the rate constant for propagation. The Mayo equation assumes that transfer to solvent is the major termination pathway. [2] [14] There are four industrial methods of radical polymerization: bulk, solution, suspension, and emulsion polymerization. [2] Other, more specialized methods of radical polymerization also exist. Also known as living radical polymerization or controlled radical polymerization, reversible deactivation radical polymerization (RDRP) relies on completely pure reactions, preventing termination caused by impurities. Because these polymerizations stop only when there is no more monomer, polymerization can continue upon the addition of more monomer. Block copolymers can be made this way. RDRP allows for control of molecular weight and dispersity. However, this is very difficult to achieve, and instead a pseudo-living polymerization occurs, with only partial control of molecular weight and dispersity. [15] ATRP and RAFT are the main types of controlled radical polymerization. In typical chain growth polymerizations, the reaction rates for initiation, propagation and termination take the standard forms R_i = 2 f k_d [I], R_p = k_p [M][M•], and R_t = 2 k_t [M•]^2, where f is the efficiency of the initiator and k_d, k_p, and k_t are the constants for initiator dissociation, chain propagation and termination, respectively. [I], [M] and [M•] are the concentrations of the initiator, monomer and the active growing chain. Under the steady-state approximation, the concentration of the active growing chains remains constant, i.e. the rates of initiation and of termination are equal. The concentration of active chains can then be derived and expressed in terms of the other known species in the system, [M•] = (f k_d [I]/k_t)^{1/2}. In this case, the rate of chain propagation can be further described as a function of the initiator and monomer concentrations, R_p = k_p [M] (f k_d [I]/k_t)^{1/2}. [20] [21] The kinetic chain length ν is a measure of the average number of monomer units reacting with an active center during its lifetime and is related to the molecular weight through the mechanism of termination. Without chain transfer, the kinetic chain length is only a function of the propagation rate and the initiation rate. [22] Assuming no chain-transfer effect occurs in the reaction, the number-average degree of polymerization P_n can be correlated with the kinetic chain length.
In the case of termination by disproportionation, one polymer molecule is produced per kinetic chain: P_n = ν. Termination by combination leads to one polymer molecule per two kinetic chains: P_n = 2ν. [20] Any mixture of both these mechanisms can be described by using the value δ, the contribution of disproportionation to the overall termination process: P_n = 2ν/(1 + δ). If chain transfer is considered, the kinetic chain length is not affected by the transfer process, because the growing free-radical center generated by the initiation step stays alive after any chain-transfer event, although multiple polymer chains are produced. However, the number-average degree of polymerization decreases as the chain transfers, since the growing chains are terminated by the chain-transfer events. Taking into account chain-transfer reactions towards solvent S, initiator I, polymer P, and added chain-transfer agent T, the equation for P_n is modified as follows: [23] 1/P_n = (1/P_n)_0 + C_S[S]/[M] + C_I[I]/[M] + C_P[P]/[M] + C_T[T]/[M]. It is usual to define chain-transfer constants C for the different molecules as the ratio of the corresponding chain-transfer rate constant to the propagation rate constant, e.g. C_S = k_tr,S/k_p. In chain growth polymerization, the position of the equilibrium between polymer and monomers can be determined by the thermodynamics of the polymerization. The Gibbs free energy (ΔG_p) of the polymerization is commonly used to quantify the tendency of a polymeric reaction. The polymerization will be favored if ΔG_p < 0; if ΔG_p > 0, the polymer will undergo depolymerization. According to the thermodynamic equation ΔG = ΔH − TΔS, a negative enthalpy and an increasing entropy will shift the equilibrium towards polymerization. In general, polymerization is an exothermic process, i.e. the enthalpy change is negative, since addition of a monomer to the growing polymer chain involves the conversion of π bonds into σ bonds, or a ring-opening reaction that releases the ring strain of a cyclic monomer. Meanwhile, during polymerization, a large number of small molecules are joined together, losing rotational and translational degrees of freedom. As a result, the entropy of the system decreases: ΔS_p < 0 for nearly all polymerization processes. Since depolymerization is almost always entropically favored, ΔH_p must then be sufficiently negative to compensate for the unfavorable entropic term. Only then will polymerization be thermodynamically favored by the resulting negative ΔG_p. In practice, polymerization is favored at low temperatures, where TΔS_p is small, and depolymerization is favored at high temperatures, where TΔS_p is large. As the temperature increases, ΔG_p becomes less negative. At a certain temperature, the polymerization reaches equilibrium (rate of polymerization = rate of depolymerization); this temperature is called the ceiling temperature (T_c), at which ΔG_p = 0 and hence T_c = ΔH_p/ΔS_p. [24] The stereochemistry of polymerization is concerned with the difference in atom connectivity and spatial orientation in polymers that have the same chemical composition. Hermann Staudinger studied the stereoisomerism in chain polymerization of vinyl monomers in the late 1920s, and it took another two decades for people to fully appreciate the idea that each of the propagation steps in the polymer growth could give rise to stereoisomerism. The major milestone in stereochemistry was established by Ziegler and Natta and their coworkers in the 1950s, as they developed metal-based catalysts to synthesize stereoregular polymers.
The reason why the stereochemistry of the polymer is of particular interest is that the physical behavior of a polymer depends not only on the general chemical composition but also on more subtle differences in microstructure. [25] Atactic polymers consist of a random arrangement of stereochemistry and are amorphous (noncrystalline), soft materials with lower physical strength. The corresponding isotactic (like substituents all on the same side) and syndiotactic (like substituents of alternate repeating units on the same side) polymers are usually obtained as highly crystalline materials. It is easier for the stereoregular polymers to pack into a crystal lattice, since they are more ordered, and the resulting crystallinity leads to higher physical strength and increased solvent and chemical resistance, as well as differences in other properties that depend on crystallinity. The prime example of the industrial utility of stereoregular polymers is polypropene. Isotactic polypropene is a high-melting (165 °C), strong, crystalline polymer, which is used as both a plastic and a fiber. Atactic polypropene is an amorphous material with an oily to waxy soft appearance that finds use in asphalt blends and formulations for lubricants, sealants, and adhesives, but the volumes are minuscule compared to those of isotactic polypropene. When a monomer adds to a radical chain end, there are two factors to consider regarding its stereochemistry: 1) the interaction between the terminal chain carbon and the approaching monomer molecule, and 2) the configuration of the penultimate repeating unit in the polymer chain. [4] The terminal carbon atom has sp2 hybridization and is planar. Consider the polymerization of the monomer CH2=CXY. There are two ways that a monomer molecule can approach the terminal carbon: the mirror approach (with like substituents on the same side) or the non-mirror approach (with like substituents on opposite sides). If free rotation does not occur before the next monomer adds, the mirror approach will always lead to an isotactic polymer and the non-mirror approach will always lead to a syndiotactic polymer (Figure 25). [4] However, if interactions between the substituents of the penultimate repeating unit and the terminal carbon atom are significant, then conformational factors could cause the monomer to add to the polymer in a way that minimizes steric or electrostatic interaction (Figure 26). [4] Traditionally, the reactivity of monomers and radicals is assessed by means of copolymerization data. The Q–e scheme, the most widely used tool for the semi-quantitative prediction of monomer reactivity ratios, was first proposed by Alfrey and Price in 1947. [26] The scheme takes into account the intrinsic thermodynamic stability and polar effects in the transition state. A given radical M_i^o and a monomer M_j are considered to have intrinsic reactivities P_i and Q_j, respectively. [27] The polar effect in the transition state, i.e. the supposed permanent electric charge carried by that entity (radical or molecule), is quantified by the factor e, which is a constant for a given monomer and has the same value for the radical derived from that specific monomer.
For the addition of monomer 2 to a growing polymer chain whose active end is the radical of monomer 1, the rate constant, k_12, is postulated to be related to the four relevant reactivity parameters by k_12 = P_1 Q_2 exp(−e_1 e_2). The monomer reactivity ratio for the addition of monomers 1 and 2 to this chain is then given by r_1 = k_11/k_12 = (Q_1/Q_2) exp(−e_1(e_1 − e_2)). [27] [28] For the copolymerization of a given pair of monomers, the two experimental reactivity ratios r_1 and r_2 permit the evaluation of (Q_1/Q_2) and (e_1 − e_2). Values for each monomer can then be assigned relative to a reference monomer, usually chosen as styrene with the arbitrary values Q = 1.0 and e = −0.8. [28] Free radical polymerization has found applications including the manufacture of polystyrene, thermoplastic block copolymer elastomers, [29] cardiovascular stents, [30] chemical surfactants [31] and lubricants. Block copolymers are used for a wide variety of applications including adhesives, footwear and toys. Free radical polymerization also allows the functionalization of carbon nanotubes. [32] CNTs' intrinsic electronic properties lead them to form large aggregates in solution, precluding useful applications. Adding small chemical groups to the walls of CNTs can eliminate this propensity and tune the response to the surrounding environment. The use of polymers instead of smaller molecules can modify CNT properties (and conversely, nanotubes can modify polymer mechanical and electronic properties). [29] For example, researchers coated carbon nanotubes with polystyrene by first polymerizing polystyrene via chain radical polymerization and subsequently mixing it at 130 °C with carbon nanotubes to generate radicals and graft them onto the walls of the carbon nanotubes (Figure 27). [33] Chain growth polymerization ("grafting to") synthesizes a polymer with predetermined properties. Purification of the polymer can be used to obtain a more uniform length distribution before grafting. Conversely, "grafting from", with radical polymerization techniques such as atom transfer radical polymerization (ATRP) or nitroxide-mediated polymerization (NMP), allows rapid growth of high-molecular-weight polymers. Radical polymerization also aids the synthesis of nanocomposite hydrogels. [34] These gels are made of water-swellable nano-scale clay (especially those classed as smectites) enveloped by a network polymer. Aqueous dispersions of clay are treated with an initiator, a catalyst, and the organic monomer, generally an acrylamide. Polymers grow off the initiators, which are in turn bound to the clay. Due to recombination and disproportionation reactions, growing polymer chains bind to one another, forming a strong, cross-linked network polymer, with clay particles acting as branching points for multiple polymer chain segments. [35] Free radical polymerization used in this context allows the synthesis of polymers from a wide variety of substrates (the chemistries of suitable clays vary). Termination reactions unique to chain growth polymerization produce a material with flexibility, mechanical strength and biocompatibility.
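As a numerical sketch tying together the steady-state rate laws, kinetic chain length, and Mayo relation restated above (a minimal illustration: the formulas are the standard textbook forms given in the text, and every rate constant and concentration below is an invented placeholder, not data from the article):

from math import sqrt

F   = 0.5      # initiator efficiency f (placeholder)
K_D = 1.0e-5   # initiator dissociation, s^-1 (placeholder)
K_P = 2.0e2    # propagation, M^-1 s^-1 (placeholder)
K_T = 1.0e7    # termination, M^-1 s^-1 (placeholder)
I0, M0 = 0.01, 5.0   # [I] and [M], mol/L (placeholders)

# Steady state: radical generation (2*f*k_d*[I]) equals termination
# (2*k_t*[M.]**2), which fixes the radical concentration:
radicals = sqrt(F * K_D * I0 / K_T)
r_p = K_P * M0 * radicals                       # propagation rate, M/s
nu = K_P * M0 / (2 * sqrt(F * K_D * K_T * I0))  # kinetic chain length

# Mayo correction for chain transfer to solvent:
K_TR, S = 1.0e-1, 10.0        # transfer rate constant and [solvent] (placeholders)
x_n0 = 2 * nu                 # e.g. termination purely by combination
x_n = 1 / (1 / x_n0 + K_TR * S / (K_P * M0))

print(f"[M.] = {radicals:.2e} M, Rp = {r_p:.2e} M/s")
print(f"kinetic chain length = {nu:.0f}, x_n with transfer = {x_n:.0f}")

Halving [I] here raises the kinetic chain length by a factor of √2, which is the quantitative version of the earlier advice to keep the initiator concentration low when long chains are desired.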
https://en.wikipedia.org/wiki/Radical_polymerization
In mathematics, in the realm of abstract algebra, a radical polynomial is a multivariate polynomial [1] over a field that can be expressed as a polynomial in the sum of squares of the variables. That is, if R = K[x_1, …, x_n] is a polynomial ring, the ring of radical polynomials is the subring generated by the polynomial x_1^2 + ⋯ + x_n^2. [2] Radical polynomials are characterized as precisely those polynomials that are invariant under the action of the orthogonal group. The ring of radical polynomials is a graded subalgebra of the ring of all polynomials. The standard separation of variables theorem asserts that every polynomial can be expressed as a finite sum of terms, each term being a product of a radical polynomial and a harmonic polynomial. This is equivalent to the statement that the ring of all polynomials is a free module over the ring of radical polynomials.
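A minimal concrete instance of the separation-of-variables statement above (a standard two-variable example, not taken from the article): with r^2 = x^2 + y^2 generating the radical polynomials in K[x, y],

x^{2} \;=\; \tfrac{1}{2}\bigl(x^{2}+y^{2}\bigr) \cdot 1 \;+\; \tfrac{1}{2}\bigl(x^{2}-y^{2}\bigr),
\qquad \Delta\bigl(x^{2}-y^{2}\bigr) = 2 - 2 = 0 .

Here the first term is a radical polynomial (a polynomial in x^2 + y^2) multiplied by the harmonic polynomial 1, and x^2 − y^2 is harmonic, so x^2 decomposes into (radical) × (harmonic) terms exactly as the theorem asserts.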
https://en.wikipedia.org/wiki/Radical_polynomial
In organic chemistry, a radical-substitution reaction is a substitution reaction involving free radicals as a reactive intermediate. [ 1 ] The reaction always involves at least two steps, and possibly a third. In the first step, called initiation ( 2 , 3 ), a free radical is created by homolysis. Homolysis can be brought about by heat or ultraviolet light, but also by radical initiators such as organic peroxides or azo compounds. UV light is used to create two free radicals from one diatomic species. The final step is called termination ( 6 , 7 ), in which the radical recombines with another radical species. If the reaction is not terminated, but instead the radical group(s) go on to react further, the steps where new radicals are formed and then react are collectively known as propagation ( 4 , 5 ), because a new radical is created that is able to participate in secondary reactions. In free radical halogenation reactions, radical substitution takes place with halogen reagents and alkane substrates. Another important class of radical substitutions involves aryl radicals. One example is the hydroxylation of benzene by Fenton's reagent. Many oxidation and reduction reactions in organic chemistry have free radical intermediates, for example the oxidation of aldehydes to carboxylic acids with chromic acid. Coupling reactions can also be considered radical substitutions. Certain aromatic substitutions take place by radical-nucleophilic aromatic substitution. Auto-oxidation is a process responsible for the deterioration of paints and food, as well as the production of certain lab hazards such as diethyl ether peroxide.
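As a concrete illustration of the initiation, propagation and termination steps described above, the classic free-radical chlorination of methane (a standard textbook instance of radical substitution, not drawn from this article's numbered scheme) runs:

% Initiation: homolysis of Cl2 under UV light
\[ \mathrm{Cl_2 \;\xrightarrow{h\nu}\; 2\,Cl^{\bullet}} \]
% Propagation: each step consumes one radical and generates another
\[ \mathrm{Cl^{\bullet} + CH_4 \longrightarrow HCl + CH_3^{\bullet}} \qquad
   \mathrm{CH_3^{\bullet} + Cl_2 \longrightarrow CH_3Cl + Cl^{\bullet}} \]
% Termination: two radicals combine
\[ \mathrm{CH_3^{\bullet} + Cl^{\bullet} \longrightarrow CH_3Cl} \qquad
   \mathrm{CH_3^{\bullet} + CH_3^{\bullet} \longrightarrow C_2H_6} \]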
https://en.wikipedia.org/wiki/Radical_substitution
In mathematics, the radical symbol, radical sign, root symbol, or surd is a symbol for the square root or higher-order root of a number. The square root of a number x is written as $\sqrt{x}$, while the nth root of x is written as $\sqrt[n]{x}$. It is also used for other meanings in more advanced mathematics, such as the radical of an ideal. In linguistics, the symbol is used to denote a root word. Each positive real number has two square roots, one positive and the other negative. The radical symbol refers to the principal value of the square root function, called the principal square root, which is the positive one. The two square roots of a negative number are both imaginary numbers, and the square root symbol refers to the principal square root, the one with a positive imaginary part. For the definition of the principal square root of other complex numbers, see Square root § Principal square root of a complex number. The origin of the root symbol √ is largely speculative. Some sources imply that the symbol was first used by Arab mathematicians. One of those mathematicians was Abū al-Hasan ibn Alī al-Qalasādī (1421–1486). Legend has it that it was taken from the Arabic letter " ج " ( ǧīm ), which is the first letter in the Arabic word " جذر " ( jadhir , meaning "root"). [ 1 ] However, Leonhard Euler [ 2 ] believed it originated from the letter "r", the first letter of the Latin word " radix " (meaning "root"), referring to the same mathematical operation. The symbol was first seen in print without the vinculum (the horizontal "bar" over the numbers inside the radical symbol) in the year 1525 in Die Coss by Christoff Rudolff, a German mathematician. In 1637 Descartes was the first to unite the German radical sign √ with the vinculum to create the radical symbol in common use today. [ 3 ] The Unicode code points for the radical symbols are U+221A (√, SQUARE ROOT), U+221B (∛, CUBE ROOT) and U+221C (∜, FOURTH ROOT); in HTML, the square root symbol may be written as the named entity &radic;. However, these characters differ in appearance from most mathematical typesetting by omitting the overline connected to the radical symbol, which surrounds the argument of the square root function. The OpenType math table allows adding this overline following the radical symbol. The Symbol font displays the character without any vinculum whatsoever; the overline may be a separate character at 0x60. [ 4 ] The JIS, [ 5 ] Wansung [ 6 ] and CNS 11643 [ 7 ] [ 8 ] code charts include a short overline attached to the radical symbol, whereas the GB 2312 [ 9 ] and GB 18030 charts do not. [ 10 ] Additionally, a "Radical Symbol Bottom" (U+23B7, ⎷) is available in the Miscellaneous Technical block. [ 11 ] This was used in contexts where box-drawing characters are used, such as in the technical character set of DEC terminals, to join up with box-drawing characters on the line above to create the vinculum. [ 12 ] In LaTeX the square root symbol may be generated by the \sqrt macro, [ 13 ] and the square root symbol without the overline may be generated by the \surd macro. [ 14 ] The square root character U+221A also appears in several legacy encodings.
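A minimal LaTeX sketch of the macros mentioned above:

% \sqrt typesets the radical sign complete with vinculum over its argument;
% the optional argument gives an nth root. \surd gives the bare sign.
\documentclass{article}
\begin{document}
$\sqrt{x}$, $\sqrt[n]{x}$ and the bare radical sign $\surd$
\end{document}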
https://en.wikipedia.org/wiki/Radical_symbol
Radical theory is an obsolete scientific theory in chemistry describing the structure of organic compounds. The theory was pioneered by Justus von Liebig, Friedrich Wöhler and Auguste Laurent around 1830 and is not related to the modern understanding of free radicals. [ 1 ] [ 2 ] In this theory, organic compounds were thought to exist as combinations of radicals that could be exchanged in chemical reactions just as chemical elements could be interchanged in inorganic compounds. The term radical was already in use when radical theory was developed. Louis-Bernard Guyton de Morveau introduced the phrase "radical" in 1785 and the phrase was employed by Antoine Lavoisier in 1789 in his Traité Élémentaire de Chimie. A radical was identified as the root base of certain acids (from the Latin word "radix", meaning "root"). The combination of a radical with oxygen would result in an acid. For example, the radical of acetic acid was called "acetic" and that of muriatic acid (hydrochloric acid) was called "muriatic". Joseph Louis Gay-Lussac found evidence for the cyanide radical in 1815 in his work on hydrogen cyanide and a number of cyanide salts he discovered. He also isolated cyanogen ((CN)2), not realizing that cyanogen is the cyanide dimer NC-CN. Jean-Baptiste Dumas proposed the ethylene radical from investigations into diethyl ether and ethanol. In his Etherin theory [ 3 ] he observed that ether consisted of two equivalents of ethylene and one equivalent of water, and that ethylene and ethanol could interconvert in chemical reactions. Ethylene was also the base fragment for a number of other compounds such as ethyl acetate. The Etherin theory was eventually abandoned by Dumas in favor of radical theory: as a radical, ethylene should have reacted with an oxide to form the hydrate, but it was found to be resistant to an oxide like calcium oxide. Henri Victor Regnault in 1834 reacted ethylene dichloride (CH2CH2·Cl2) with KOH, forming vinyl chloride, water, and KCl. [ 4 ] In Etherin theory it should not be possible to break up the ethylene fragment in this way. Radical theory replaced electrochemical dualism, which stated that all molecules were to be considered as salts composed of basic and acidic oxides. Liebig and Wöhler observed in 1832 [ 5 ] in an investigation of benzoin resin (benzoic acid) that the compounds almond oil (benzaldehyde), "Benzoestoff" (benzyl alcohol), benzoyl chloride and benzamide all share a common C7H5O fragment, and that these compounds could all be synthesized from almond oil by simple substitutions. The C7H5O fragment was considered a "radical of benzoic acid" and called benzoyl. Organic radicals were thus placed on the same level as the inorganic elements. Just like the inorganic elements (simple radicals), the organic radicals (compound radicals) were indivisible. The theory was developed thanks to improvements in elemental analysis by von Liebig. Laurent contributed to the theory by reporting the isolation of benzoyl itself in 1835; [ 6 ] however, the isolated chemical is today recognised as its dimer, dibenzoyl. Raffaele Piria reported the salicyl radical as the base for salicylic acid. Liebig published a definition of a radical in 1838. [ 7 ] [ 8 ] Berzelius and Robert Bunsen investigated the radical cacodyl (reaction of cacodyl chloride with zinc) around 1841, now also known to be a dimer species, (CH3)2As-As(CH3)2.
[ 9 ] Edward Frankland and Hermann Kolbe contributed to the radical theory by investigating the ethyl and the methyl radicals. Frankland first reported diethylzinc in 1848. Frankland and Kolbe together investigated the reaction of ethyl cyanide and zinc in 1849, [ 10 ] reporting the isolation not of the ethyl radical but of the methyl radical (CH3), which was in fact ethane. Kolbe also investigated the electrolysis of potassium salts of some fatty acids. Acetic acid was regarded as the combination of the methyl radical and oxalic acid, and electrolysis of the salt again yielded ethane as a gas, misidentified as the liberated methyl radical. In 1850 Frankland investigated ethyl radicals. [ 11 ] In the course of this work, butane formed by reaction of ethyl iodide and zinc was mistakenly identified as the ethyl radical. August Wilhelm von Hofmann, Auguste Laurent and Charles Frédéric Gerhardt challenged Frankland and Kolbe by suggesting that the ethyl radical was in fact a dimer called dimethyl. Frankland and Kolbe countered that ethyl hydride was also a possibility, [ 3 ] and in 1864 Carl Schorlemmer proved that dimethyl and ethyl hydride were in fact one and the same compound. Radical theory was eventually replaced by a number of theories, each advocating specific entities. One adaptation of radical theory was called the theory of types (theory of residues), advocated by Charles-Adolphe Wurtz, August Wilhelm von Hofmann and Charles Frédéric Gerhardt. Another was the water type, as promoted by Alexander William Williamson. Jean-Baptiste Dumas and Auguste Laurent (an early supporter of radical theory) challenged radical theory in 1840 with a Law of Substitution (or Theory of Substitution). [ 3 ] This law acknowledged that any hydrogen atom, even as part of a radical, could be substituted by a halogen. Eventually Frankland in 1852 [ 12 ] and August Kekulé in 1857 [ 13 ] introduced valence theory with the tetravalency of carbon as its central theme, making trivalent carbon obsolete for the time being. In 1900 Moses Gomberg unexpectedly discovered true trivalent carbon and the first radical in the modern sense of the word in his (unsuccessful) attempt to make hexaphenylethane. [ 14 ] In current organic chemistry, concepts such as benzoyl [ 15 ] and acetyl [ 16 ] persist in chemical nomenclature, but only to identify a functional group having the same fragment.
https://en.wikipedia.org/wiki/Radical_theory
A radio-controlled model (or RC model) is a model that is steerable with the use of radio control (RC). All types of model vehicles have had RC systems installed in them, including ground vehicles, boats, planes, helicopters and even submarines and scale railway locomotives. World War II saw increased development in radio control technology. The Luftwaffe used controllable winged bombs for targeting Allied ships. During the 1930s the Good brothers, Bill and Walt, pioneered vacuum tube based control units for RC hobby use. Their "Guff" radio controlled plane is on display at the Smithsonian National Air and Space Museum. Ed Lorenze published a design in Model Airplane News that was built by many hobbyists. Later, after World War II, from the late 1940s to the mid-1950s, many other RC designs emerged and some were sold commercially; Berkeley's Super Aerotrol was one such example. Originally simple 'on-off' systems, these evolved to use complex systems of relays to control the speed and direction of a rubber-powered escapement. In another, more sophisticated version developed by the Good brothers called TTPW, information was encoded by varying the signal's mark/space ratio (pulse proportional). Commercial versions of these systems quickly became available. The tuned reed system brought new sophistication, using metal reeds to resonate with the transmitted signal and operate one of a number of different relays. In the 1960s the availability of transistor-based equipment led to the rapid development of fully proportional servo-based "digital proportional" systems, achieved initially with discrete components, again driven largely by amateurs but resulting in commercial products. In the 1970s, integrated circuits made the electronics small, light and cheap enough for the 1960s-established multi-channel digital proportional systems to become much more widely available. In the 1990s miniaturised equipment became widely available, allowing radio control of the smallest models, and by the 2000s radio control was commonplace even for the control of inexpensive toys. At the same time the ingenuity of modellers has been sustained, and the achievements of amateur modellers using new technologies have extended to such applications as gas-turbine powered aircraft, aerobatic helicopters and submarines. Before radio control, many models would use simple burning fuses or clockwork mechanisms to control flight or sailing times. Sometimes clockwork controllers would also control and vary direction or behaviour. Other methods included tethering to a central point (popular for model cars and hydroplanes), round-the-pole control for electric model aircraft, and control lines (called u-control in the US) for internal combustion powered aircraft. The first general use of radio control systems in models started in the late 1940s with single-channel self-built equipment; commercial equipment came soon thereafter. Initially remote control systems used escapement (often rubber-driven) mechanical actuation in the model. Commercial sets often used ground-standing transmitters, long whip antennas with separate ground poles, and single vacuum tube receivers. The first kits had dual tubes for more selectivity. Such early systems were invariably super-regenerative circuits, which meant that two controllers used in close proximity would interfere with one another. The requirement for heavy batteries to drive tubes also meant that model boat systems were more successful than model aircraft.
The advent of transistors greatly reduced the battery requirements, since the current requirements at low voltage were greatly reduced and the high voltage battery was eliminated. Low cost systems employed a superregenerative transistor receiver sensitive to a specific audio tone modulation, the latter greatly reducing interference from 27 MHz Citizens' band radio communications on nearby frequencies. Use of an output transistor further increased reliability by eliminating the sensitive output relay, a device subject to both motor-induced vibration and stray dust contamination. In both tube and early transistor sets the model's control surfaces were usually operated by an electromagnetic escapement controlling the stored energy in a rubber-band loop, allowing simple rudder control (right, left, and neutral) and sometimes other functions such as motor speed and kick-up elevator. [ 1 ] In the late 1950s, RC hobbyists had mastered tricks to manage proportional control of the flight control surfaces, for example by rapidly switching reed systems on and off, a technique called "skillful blipping" or, more humorously, "nervous proportional". [ 2 ] By the early 1960s transistors had replaced the tube, and electric motors driving control surfaces were more common. The first low cost "proportional" systems did not use servos, but rather employed a bidirectional motor with a proportional pulse train that consisted of two tones, pulse-width modulated (TTPW). This system, and another commonly known as "Kicking Duck/Galloping Ghost", was driven with a pulse train that caused the rudder and elevator to "wag" through a small angle (not affecting flight owing to small excursions and high speed), with the average position determined by the proportions of the pulse train. A more sophisticated and unique proportional system was developed by Hershel Toomin of Electrosolids Corporation, called the Space Control. This benchmark system used two tones, pulse-width and rate modulated, to drive four fully proportional servos, and was manufactured and refined by Zel Ritchie, who ultimately gave the technology to the Dunhams of Orbit in 1964. The system was widely imitated, and others (Sampey, ACL, DeeBee) tried their hand at developing what was then known as analog proportional. But these early analog proportional radios were very expensive, putting them out of the reach of most modelers. Eventually, single-channel gave way to multi-channel devices (at significantly higher cost) with various audio tones driving electromagnets affecting tuned resonant reeds for channel selection. Crystal oscillator superheterodyne receivers with better selectivity and stability made control equipment more capable and at lower cost. The constantly diminishing equipment weight was crucial to ever increasing modelling applications. Superheterodyne circuits became more common, enabling several transmitters to operate closely together and enabling further rejection of interference from adjacent Citizens' band voice radio bands. Multi-channel developments were of particular use to aircraft, which really need a minimum of three control dimensions (yaw, pitch and motor speed), as opposed to boats, which can be controlled with two or one. Radio control 'channels' were originally outputs from a reed array, in other words, a simple on-off switch.
To provide a usable control signal, a control surface needs to be moved in two directions, so at least two 'channels' would be needed unless a complex mechanical link could be made to provide two-directional movement from a single switch. Several of these complex links were marketed during the 1960s, including the Graupner Kinematic Orbit, Bramco, and Kraft simultaneous reed sets. Doug Spreng is credited with developing the first "digital" pulse-width feedback servo and, along with Don Mathis, developed and sold the first digital proportional radio, called the "Digicon", followed by Bonner's Digimite and Hoover's F&M Digital 5. With the electronics revolution, single-signal channel circuit design became redundant, and instead radios provided coded signal streams which a servomechanism could interpret. Each of these streams replaced two of the original 'channels', and, confusingly, the signal streams began to be called 'channels'. So an old on/off 6-channel transmitter which could drive the rudder, elevator and throttle of an aircraft was replaced with a new proportional 3-channel transmitter doing the same job. Controlling all the primary controls of a powered aircraft (rudder, elevator, ailerons and throttle) was known as 'full-house' control. A glider could be 'full-house' with only three channels. Soon a competitive marketplace emerged, bringing rapid development. By the 1970s the trend for 'full-house' proportional radio control was fully established. Typical radio control systems for radio-controlled models employ pulse-width modulation (PWM), pulse-position modulation (PPM) and, more recently, spread-spectrum technology, and actuate the various control surfaces using servomechanisms. These systems made 'proportional control' possible, where the position of the control surface in the model is proportional to the position of the control stick on the transmitter. PWM is most commonly used in radio control equipment today, where transmitter controls change the width (duration) of the pulse for that channel between 920 μs and 2120 μs, 1520 μs being the center (neutral) position. The pulse is repeated in a frame of between 10 and 30 milliseconds in length. Off-the-shelf servos respond directly to servo control pulse trains of this type using integrated decoder circuits, and in response they actuate a rotating arm or lever on the top of the servo. An electric motor and reduction gearbox are used to drive the output arm and a variable component such as a potentiometer or tuning capacitor. The variable capacitor or resistor produces an error signal voltage proportional to the output position, which is compared with the position commanded by the input pulse, and the motor is driven until a match is obtained. The pulse train representing the whole set of channels is easily decoded into separate channels at the receiver using very simple circuits such as a Johnson counter. The relative simplicity of this system allows receivers to be small and light, and it has been widely used since the early 1970s. Usually a single-chip 4017 decade counter is used inside the receiver to decode the transmitted multiplexed PPM signal into the individual "RC PWM" signals sent to each RC servo. [ 3 ] [ 4 ] [ 5 ] Often a Signetics NE544 IC or a functionally equivalent chip is used inside the housing of low-cost RC servos as the motor controller: it decodes that servo control pulse train to a position and drives the motor to that position.
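A minimal sketch of the servo pulse-width convention described above (920 to 2120 μs with 1520 μs neutral); the function names are illustrative, not part of any standard:

PULSE_MIN_US = 920
PULSE_MAX_US = 2120
PULSE_CENTER_US = 1520

def stick_to_pulse_us(position: float) -> float:
    # Map a normalized stick position (-1.0 .. +1.0) onto a pulse width.
    position = max(-1.0, min(1.0, position))       # clamp to the valid range
    half_span = (PULSE_MAX_US - PULSE_MIN_US) / 2  # 600 us either side of center
    return PULSE_CENTER_US + position * half_span

def pulse_us_to_stick(pulse_us: float) -> float:
    # Inverse mapping, as a receiver-side decoder would perform it.
    half_span = (PULSE_MAX_US - PULSE_MIN_US) / 2
    return (pulse_us - PULSE_CENTER_US) / half_span

print(stick_to_pulse_us(0.0))   # 1520.0, the neutral position
print(stick_to_pulse_us(1.0))   # 2120.0, full deflection one way
print(pulse_us_to_stick(920))   # -1.0, full deflection the other way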
[ 6 ] More recently, high-end hobby systems using pulse-code modulation (PCM) features have come on the market that provide a digital bit-stream signal to the receiving device instead of analog-type pulse modulation. Advantages include bit error checking capabilities of the data stream (good for signal integrity checking) and fail-safe options, including motor (if the model has a motor) throttle-down and similar automatic actions based on signal loss. However, systems that use pulse-code modulation generally induce more lag, because fewer frames are sent per second as bandwidth is needed for error-checking bits. PCM devices can only detect errors and thus hold the last verified position or go into failsafe mode; they cannot correct transmission errors. In the early 21st century, 2.4 gigahertz (GHz) transmissions have become increasingly utilised in high-end control of model vehicles and aircraft. This range of frequencies has many advantages. Because the 2.4 GHz wavelength is so small (about 12.5 centimetres), the antennas on the receivers do not need to exceed 3 to 5 cm. Electromagnetic noise, for example from electric motors, is not 'seen' by 2.4 GHz receivers, because such noise tends to lie around 10 to 150 MHz. The transmitter antenna only needs to be 10 to 20 cm long, and receiver power usage is much lower; batteries can therefore last longer. In addition, no crystals or frequency selection is required, as the latter is performed automatically by the transmitter. However, the short wavelengths do not diffract as easily as the longer wavelengths of PCM/PPM equipment, so 'line of sight' is required between the transmitting antenna and the receiver. Also, should the receiver lose power, even for a few milliseconds, or get 'swamped' by 2.4 GHz interference, it can take a few seconds for the receiver (which, in the case of 2.4 GHz, is almost invariably a digital device) to re-sync. RC electronics have three essential elements. The transmitter is the controller. Transmitters have control sticks, triggers, switches, and dials at the user's fingertips. The receiver is mounted in the model. It receives and processes the signal from the transmitter, translating it into signals that are sent to the servos and speed controllers. The number of servos in a model determines the number of channels the radio must provide. Typically the transmitter multiplexes and modulates the signal into pulse-position modulation. The receiver demodulates and demultiplexes the signal and translates it into the special kind of pulse-width modulation used by standard RC servos and controllers. In the 1980s, a Japanese electronics company, Futaba, copied wheeled steering for RC cars; it was originally developed by Orbit for a transmitter specially designed for Associated cars. It has been widely accepted, along with a trigger control for throttle. Often configured for right-handed users, the transmitter looks like a pistol with a wheel attached on its right side. Pulling the trigger accelerates the car forward, while pushing it either stops the car or causes it to go into reverse. Some models are available in left-handed versions. There are thousands of RC vehicles available. Most are toys suitable for children. What separates toy-grade RC from hobby-grade RC is the modular characteristic of the standard RC equipment. RC toys generally have simplified circuits, often with the receiver and servos incorporated into one circuit.
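The antenna-length reasoning above follows directly from wavelength = c / f; a quick sketch:

C = 299_792_458  # speed of light, m/s

def wavelength_m(freq_hz: float) -> float:
    return C / freq_hz

for label, f in (("27 MHz", 27e6), ("2.4 GHz", 2.4e9)):
    lam = wavelength_m(f)
    print(f"{label}: wavelength = {lam:.3f} m, quarter-wave whip = {lam / 4 * 100:.1f} cm")

# 2.4 GHz gives a ~12.5 cm wavelength, so a quarter-wave element is ~3.1 cm,
# consistent with the 3 to 5 cm receiver antennas described above.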
It is almost impossible to take that particular toy circuit and transplant it into other RCs. Hobby-grade RC systems have modular designs. Many cars, boats, and aircraft can accept equipment from different manufacturers, so it is possible to take RC equipment from a car and install it into a boat, for example. However, moving the receiver component between aircraft and surface vehicles is illegal in most countries, as radio frequency laws allocate separate bands for air and surface models. This is done for safety reasons. Most manufacturers now offer "frequency modules" (known as crystals) that simply plug into the back of their transmitters, allowing one to change frequencies, and even bands, at will. Some of these modules are capable of "synthesizing" many different channels within their assigned band. Hobby-grade models can be fine-tuned, unlike most toy-grade models. For example, cars often allow toe-in, camber and caster angle adjustments, just like their real-life counterparts. All modern "computer" radios allow each function to be adjusted over several parameters for ease in setup and adjustment of the model. Many of these transmitters are capable of "mixing" several functions at once, which is required for some models. Many of the most popular hobby-grade radios were first developed and mass-produced in Southern California by Orbit, Bonner, Kraft, Babcock, Deans, Larson, RS, S&O, and Milcott. Later, Japanese companies like Futaba, Sanwa and JR took over the market. Radio-controlled aircraft (also called RC aircraft) are small aircraft that can be controlled remotely. There are many different types, ranging from small park flyers to large jets and mid-sized aerobatic models. The aircraft use many different methods of propulsion, ranging from brushed or brushless electric motors, to internal combustion engines, to the most expensive gas turbines. The fastest aircraft, dynamic slope soarers, can reach speeds of over 450 mph (720 km/h) by dynamic soaring, repeatedly circling through the gradient of wind speeds over a ridge or slope. [ 7 ] Newer jets can achieve above 300 mph (480 km/h) in a short distance. Radio-controlled tanks are replicas of armored fighting vehicles that can move, rotate the turret and, in some cases, even shoot, all by using the hand-held transmitter. Radio-controlled tanks are produced in numerous scale sizes for commercial offerings. In 1/35 scale, probably the best-known make is by Tamiya. In 1/24 scale, which often includes a mounted airsoft gun, possibly the best offering is by Tokyo Marui, but there are imitations by Heng Long, who offer cheap remakes of the tanks. The downside to the Heng Long imitations is that they were standardized on their Type 90 tank, which has 6 road wheels; they then produced a Leopard 2 and M1A2 Abrams on the same chassis, even though both of those tanks have 7 road wheels. 1/16 is the most imposing of the common scales. Tamiya produce some of the best models in this scale; these usually include realistic features like flashing lights, engine sounds, main gun recoil and, on their Leopard 2A6, an optional gyro-stabilization system for the gun. Chinese manufacturers such as Heng Long and Matorro also produce a variety of high-quality 1/16 tanks and other AFVs. [ 8 ] Both the Tamiya and the Heng Long vehicles can make use of an infrared battle system, which attaches a small IR "gun" and target to the tanks, allowing them to engage in direct battle. As with cars, tanks can come from ready-to-run to a full assembly kit.
In more private offerings there are 1/6 and 1/4 scale vehicles available. The largest RC tank available anywhere in the world is the King Tiger in 1/4 scale, over 8 feet (2.4 m) long. These GRP fiberglass tanks were originally created and produced by Alex Shlakhter. A radio-controlled car is a powered model car driven from a distance. Gasoline, nitro-methanol and electric cars exist, designed to be run both on and off-road. "Gas" cars traditionally use petrol (gasoline), though many hobbyists run 'nitro' cars, using a mixture of methanol and nitromethane, to get their power. Logistic RC models include tractor units, semi-trailer trucks, semi-trailers, terminal tractors, refrigerator trucks, forklift trucks, empty-container handlers, and reach stackers. Most of them are in 1:14 scale and run on electric motors. Radio-controlled helicopters, although often grouped with RC aircraft, are unique because of the differences in construction, aerodynamics and flight training. Several designs of RC helicopters exist, some with limited maneuverability (and thus easier to learn to fly), and those with more maneuverability (and thus harder to learn to fly). Radio-controlled boats are model boats controlled remotely with radio control equipment. The main types of RC boat are scale models (12 inches (30 cm) to 144 inches (365 cm) in size), the sailing boat and the power boat; the latter is the more popular amongst toy-grade models. Radio-controlled models were used for the children's television program Theodore Tugboat. Out of radio-controlled model boats sprang a new hobby: gas-powered model boating. Radio-controlled, gasoline-powered model boats first appeared in 1962, designed by engineer Tom Perzinka of Octura Models. [ citation needed ] The gas model boats were powered with O&R (Ohlsson and Rice) small 20 cc ignition gasoline utility engines. This was a completely new concept in the early years of available radio-control systems. The boat was called the "White Heat" and was a hydro design, meaning it had more than one wetted surface. Towards the late 1960s and early 1970s another gasoline-powered model was created and powered with a similar chainsaw engine. This boat was named "The Moppie" after its full-size counterpart. Like the White Heat, between the costs of production, engine, and radio equipment, the project failed at market and perished. By 1970, nitro (glow ignition) power became the norm for model boating. In 1982 Tony Castronovo, a hobbyist in Fort Lauderdale, Florida, marketed the first production gasoline string-trimmer-engine-powered (22 cc gasoline ignition engine) radio-controlled model boat, a 44-inch vee-bottom design. It achieved a top speed of 30 miles per hour. The boat was marketed under the trade name "Enforcer" and sold by his company Warehouse Hobbies, Inc. The following years of marketing and distribution aided the spread of gasoline-powered model boating throughout the US, Europe, Australia, and many other countries around the world. As of 2010, gasoline radio-controlled model boating has grown worldwide. The industry has spawned many manufacturers and thousands of model boaters. Today the average gasoline-powered boat can easily run at speeds over 45 mph, with the more exotic gas boats running at speeds exceeding 90 mph. That year also saw ML Boatworks develop laser-cut wood scale hydroplane racing kits that rejuvenated a sector of the hobby that was turning to composite boats instead of the classic art of building wood models.
These kits also gave fast-electric modelers a much-needed platform in the hobby. Many of Tony Castronovo's designs and innovations in gasoline model boating are the foundation upon which the industry has been built. [ citation needed ] He was first to introduce surface drive on a vee hull (propeller hub above the water line) to model boating, which he named "SPD" (surface planing drive), as well as numerous products and developments relative to gasoline-powered model boating. He and his company continue to produce gasoline-powered model boats and components. Radio-controlled submarines can range from inexpensive toys to complex projects involving sophisticated electronics. Oceanographers and the military also operate radio-controlled submarines. The majority of robots used in shows such as BattleBots and Robot Wars are remotely controlled, relying on most of the same electronics as other radio-controlled vehicles. They are frequently equipped with weapons for the purpose of damaging opponents, including but not limited to hammering axes, "flippers" and spinners. Internal combustion engines for remote control models have typically been two-stroke engines that run on specially blended fuel. Engine sizes are typically given in cm³ or cubic inches, ranging from tiny .02 in³ engines to 1.60 in³ or larger. For even larger sizes, many modelers turn to four-stroke or gasoline engines (see below). Glow plug engines have an ignition device that possesses a platinum wire coil in the glow plug, which catalytically glows in the presence of the methanol in glow engine fuel, providing the combustion source. Since 1976, practical "glow" ignition four-stroke model engines have been available on the market, ranging in size from 3.5 cm³ upwards to 35 cm³ in single-cylinder designs. Various twin- and multi-cylinder glow ignition four-stroke model engines are also available, echoing the appearance of full-sized radial, inline and opposed-cylinder aircraft powerplants. The multi-cylinder models can become enormous, such as the Saito five-cylinder radial. They tend to be quieter in operation than two-stroke engines, using smaller mufflers, and also use less fuel. Glow engines tend to produce large amounts of oily mess due to the oil in the fuel. They are also much louder than electric motors. Another alternative is the gasoline engine. While glow engines run on special and expensive hobby fuel, gasoline engines run on the same fuel that powers cars, lawnmowers and weed whackers. These typically run on a two-stroke cycle, but are radically different from glow two-stroke engines: they are typically much larger, like the 80 cm³ Zenoah. These engines can develop several horsepower, remarkable for something that can be held in the palm of the hand. Electric power is often the chosen form of power for aircraft, cars and boats. Electric power in aircraft in particular has become popular recently, mainly due to the popularity of park flyers and the development of technologies like brushless motors and lithium polymer batteries. These allow electric motors to produce much more power, rivaling that of fuel-powered engines. It is also relatively simple to increase the torque of an electric motor at the expense of speed, while it is much less common to do so with a fuel engine, perhaps due to its roughness. This permits a more efficient larger-diameter propeller to be used, which provides more thrust at lower airspeeds (e.g. an electric glider climbing steeply to a good thermalling altitude).
In aircraft, cars, trucks and boats, glow and gas engines are still used even though electric power has been the most common form of power for a while. In a typical brushless motor and electronic speed controller (ESC) combination used with radio-controlled cars, the integrated heat sink makes the speed controller almost as large as the motor itself. Due to size and weight limitations, heat sinks are not common in RC aircraft ESCs, so there the ESC is almost always smaller than the motor. Remote control: most RC models make use of a handheld remote device with an antenna that sends signals to the vehicle's receiver. There are two control sticks. The left stick changes the altitude of a flying vehicle or moves a ground vehicle forward or in reverse. On flying-model controllers the stick may stay wherever the finger places it, or it may be spring-loaded so that it returns to its neutral position once released. Generally, in remotes used for ground-moving RC vehicles, the left stick's neutral position is in the centre. The right stick moves a flying vehicle around in the air in different directions; with ground vehicles it is used for steering. The controller also has a trim setting, which helps keep the vehicle tracking in one direction. Low-grade RC vehicles often include a charging cable inside the remote, with a green light indicating that the battery is charging. Phone and tablet control: with the spread of touch-screen devices, mostly phones and tablets, many RC vehicles can be controlled from Apple or Android devices, using an app specific to that particular RC model from the platform's app store. The controls are almost identical to those on a physical remote control, but can sometimes vary from an actual controller depending on the type of vehicle. The device is not included with the vehicle set, but the box does come with a radio module that plugs into the headphone socket of any smartphone or tablet.
https://en.wikipedia.org/wiki/Radio-controlled_model
Radio-frequency (RF) engineering is a subset of electrical engineering involving the application of transmission line, waveguide, antenna, radar, and electromagnetic field principles to the design and application of devices that produce or use signals within the radio band, the frequency range of about 20 kHz up to 300 GHz. [ 1 ] [ 2 ] [ 3 ] It is incorporated into almost everything that transmits or receives a radio wave, which includes, but is not limited to, mobile phones, radios, Wi-Fi, and two-way radios. RF engineering is a highly specialized field covering several areas of expertise. To produce quality results, the RF engineer needs to have an in-depth knowledge of mathematics, physics and general electronics theory, as well as specialized training in areas such as wave propagation, impedance transformations, filters and microstrip printed circuit board design. [ citation needed ] Radio electronics is concerned with electronic circuits which receive or transmit radio signals. Typically, such circuits must operate at radio frequency and power levels, which imposes special constraints on their design. These constraints increase in importance with higher frequencies. At microwave frequencies, the reactance of signal traces becomes a crucial part of the physical layout of the circuit. Radio-frequency engineers are specialists in their respective field and can take on many different roles, such as design, installation, and maintenance. Radio-frequency engineers require many years of extensive experience in their area of study. This type of engineer has experience with transmission systems, device design, and placement of antennas for optimum performance. The RF engineer's job description at a broadcast facility can include maintenance of the station's high-power broadcast transmitters and associated systems. This includes transmitter site emergency power, remote control, main transmission line and antenna adjustments, microwave radio relay STL/TSL links, and more. In addition, a radio-frequency design engineer must be able to understand electronic hardware design, circuit board material, antenna radiation, and the effect of interfering frequencies that prevent optimum performance within the piece of equipment being developed. There are many applications of electromagnetic theory to radio-frequency engineering, using conceptual tools such as vector calculus and complex analysis. [ 5 ] [ 6 ] Topics studied in this area include waveguides and transmission lines, the behavior of radio antennas, and the propagation of radio waves through the Earth's atmosphere. Historically, the subject played a significant role in the development of nonlinear dynamics. [ 7 ]
https://en.wikipedia.org/wiki/Radio-frequency_engineering
Radio-frequency induction (RF induction) is the use of a radio frequency magnetic field to transfer energy by means of electromagnetic induction in the near field. A radio-frequency alternating current is passed through a coil of wire that acts as the transmitter, and a second coil or conducting object, magnetically coupled to the first coil, acts as the receiver.
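A minimal sketch of the magnitude relations for two inductively coupled coils; all component values are illustrative assumptions:

import math

def induced_emf(f_hz, L1, L2, k, I1):
    # Mutual inductance of two coupled coils: M = k * sqrt(L1 * L2).
    # A sinusoidal primary current of amplitude I1 at frequency f induces an
    # open-circuit secondary EMF of amplitude |V2| = 2 * pi * f * M * I1.
    M = k * math.sqrt(L1 * L2)
    return 2 * math.pi * f_hz * M * I1

# Illustrative values: 13.56 MHz drive, two 1 uH coils, coupling k = 0.1,
# 100 mA primary current.
print(f"{induced_emf(13.56e6, 1e-6, 1e-6, 0.1, 0.1):.2f} V")  # ~0.85 V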
https://en.wikipedia.org/wiki/Radio-frequency_induction
A radio-frequency microelectromechanical system (RF MEMS) is a microelectromechanical system with electronic components comprising moving sub-millimeter-sized parts that provide radio-frequency (RF) functionality. [ 1 ] RF functionality can be implemented using a variety of RF technologies. Besides RF MEMS technology, III-V compound semiconductor (GaAs, GaN, InP, InSb), ferrite, ferroelectric, silicon-based semiconductor (RF CMOS, SiC and SiGe), and vacuum tube technology are available to the RF designer. Each of the RF technologies offers a distinct trade-off between cost, frequency, gain, large-scale integration, lifetime, linearity, noise figure, packaging, power handling, power consumption, reliability, ruggedness, size, supply voltage, switching time and weight. There are various types of RF MEMS components, such as CMOS-integrable RF MEMS resonators and self-sustained oscillators with small form factor and low phase noise, RF MEMS tunable inductors, and RF MEMS switches, switched capacitors and varactors. The components discussed in this article are based on RF MEMS switches, switched capacitors and varactors. These components can be used instead of FET and HEMT switches (FET and HEMT transistors in common gate configuration) and PIN diodes. RF MEMS switches, switched capacitors and varactors are classified by actuation method (electrostatic, electrothermal, magnetostatic, piezoelectric), by axis of deflection (lateral, vertical), by circuit configuration (series, shunt), by clamp configuration (cantilever, fixed-fixed beam), or by contact interface (capacitive, ohmic). Electrostatically actuated RF MEMS components offer low insertion loss and high isolation, linearity, power handling and Q factor, and do not consume power, but require a high control voltage and hermetic single-chip packaging (thin film capping, LCP or LTCC packaging) or wafer-level packaging (anodic or glass frit wafer bonding). RF MEMS switches were pioneered by IBM Research Laboratory, San Jose, CA, [ 2 ] [ 3 ] Hughes Research Laboratories, Malibu, CA, [ 4 ] Northeastern University in cooperation with Analog Devices, Boston, MA, [ 5 ] Raytheon, Dallas, TX, [ 6 ] [ 7 ] and Rockwell Science, Thousand Oaks, CA. [ 8 ] A capacitive fixed-fixed beam RF MEMS switch, as shown in Fig. 1(a), is in essence a micro-machined capacitor with a moving top electrode, the beam. It is generally connected in shunt with the transmission line and used in X- to W-band (77 GHz and 94 GHz) RF MEMS components. An ohmic cantilever RF MEMS switch, as shown in Fig. 1(b), is capacitive in the up-state, but makes an ohmic contact in the down-state. It is generally connected in series with the transmission line and is used in DC to Ka-band components. From an electromechanical perspective, the components behave like a damped mass-spring system, actuated by an electrostatic force. The spring constant is a function of the dimensions of the beam, as well as the Young's modulus, the residual stress and the Poisson ratio of the beam material. The electrostatic force is a function of the capacitance and the bias voltage. Knowledge of the spring constant allows for hand calculation of the pull-in voltage, which is the bias voltage necessary to pull in the beam, whereas knowledge of the spring constant and the mass allows for hand calculation of the switching time. From an RF perspective, the components behave like a series RLC circuit with negligible resistance and inductance.
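The hand calculation of the pull-in voltage mentioned above is commonly done with the parallel-plate approximation; the following sketch uses that standard formula with illustrative dimensions, not values taken from this article:

import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def pull_in_voltage(k_spring, gap_m, area_m2):
    # Parallel-plate approximation: Vpi = sqrt(8 k g0^3 / (27 eps0 A)),
    # with spring constant k, initial gap g0 and electrode area A.
    return math.sqrt(8 * k_spring * gap_m**3 / (27 * EPS0 * area_m2))

# Illustrative values: k = 10 N/m, g0 = 3 um, electrode 100 um x 100 um.
v_pi = pull_in_voltage(k_spring=10.0, gap_m=3e-6, area_m2=100e-6 * 100e-6)
print(f"Pull-in voltage ~ {v_pi:.0f} V")  # ~30 V, i.e. a 'high control voltage'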
The up-state and down-state capacitances are on the order of 50 fF and 1.2 pF, which are functional values for millimeter-wave circuit design. Switches typically have a capacitance ratio of 30 or higher, while switched capacitors and varactors have a capacitance ratio of about 1.2 to 10. The loaded Q factor is between 20 and 50 in the X-, Ku- and Ka-band. [ 9 ] RF MEMS switched capacitors are capacitive fixed-fixed beam switches with a low capacitance ratio. RF MEMS varactors are capacitive fixed-fixed beam switches which are biased below the pull-in voltage. Other examples of RF MEMS switches are ohmic cantilever switches, and capacitive single-pole-N-throw (SPNT) switches based on the axial gap wobble motor. [ 10 ] RF MEMS components are biased electrostatically using a bipolar NRZ drive voltage, as shown in Fig. 2, in order to avoid dielectric charging [ 11 ] and to increase the lifetime of the device. Dielectric charges exert a permanent electrostatic force on the beam. The use of a bipolar NRZ drive voltage instead of a DC drive voltage avoids dielectric charging while the electrostatic force exerted on the beam is maintained, because the electrostatic force varies quadratically with the drive voltage and is therefore insensitive to its polarity. Electrostatic biasing implies no current flow, allowing high-resistivity bias lines to be used instead of RF chokes. RF MEMS components are fragile and require wafer-level packaging or single-chip packaging, which allow for hermetic cavity sealing. A cavity is required to allow movement, whereas hermeticity is required to prevent cancellation of the spring force by the Van der Waals force exerted by water droplets and other contaminants on the beam. RF MEMS switches, switched capacitors and varactors can be packaged using wafer-level packaging. Large monolithic RF MEMS filters, phase shifters, and tunable matching networks require single-chip packaging. Wafer-level packaging is implemented before wafer dicing, as shown in Fig. 3(a), and is based on anodic, metal diffusion, metal eutectic, glass frit, polymer adhesive, and silicon fusion wafer bonding. The selection of a wafer-level packaging technique is based on balancing the thermal expansion coefficients of the material layers of the RF MEMS component and those of the substrates to minimize the wafer bow and the residual stress, as well as on alignment and hermeticity requirements. Figures of merit for wafer-level packaging techniques are chip size, hermeticity, processing temperature, (in)tolerance to alignment errors and surface roughness. Anodic and silicon fusion bonding do not require an intermediate layer, but do not tolerate surface roughness. Wafer-level packaging techniques based on a bonding technique with a conductive intermediate layer (conductive split ring) restrict the bandwidth and isolation of the RF MEMS component. The most common wafer-level packaging techniques are based on anodic and glass frit wafer bonding. Wafer-level packaging techniques, enhanced with vertical interconnects, offer the opportunity of three-dimensional integration. Single-chip packaging, as shown in Fig. 3(b), is implemented after wafer dicing, using pre-fabricated ceramic or organic packages, such as LCP injection-molded packages or LTCC packages. Pre-fabricated packages require hermetic cavity sealing through clogging, shedding, soldering or welding. Figures of merit for single-chip packaging techniques are chip size, hermeticity, and processing temperature.
An RF MEMS fabrication process is based on surface micromachining techniques and allows for integration of SiCr or TaN thin film resistors (TFR), metal-air-metal (MAM) capacitors, metal-insulator-metal (MIM) capacitors, and RF MEMS components. An RF MEMS fabrication process can be realized on a variety of wafers: III-V compound semi-insulating, borosilicate glass, fused silica (quartz), LCP, sapphire, and passivated silicon wafers. As shown in Fig. 4, RF MEMS components can be fabricated in class 100 clean rooms using 6 to 8 optical lithography steps with a 5 μm contact alignment error, whereas state-of-the-art MMIC and RFIC fabrication processes require 13 to 25 lithography steps. The essential microfabrication steps are outlined in Fig. 4. With the exception of the removal of the sacrificial spacer, which requires critical point drying, the fabrication steps are similar to CMOS fabrication process steps. RF MEMS fabrication processes, unlike BST or PZT ferroelectric and MMIC fabrication processes, do not require electron beam lithography, MBE, or MOCVD. Contact interface degradation poses a reliability issue for ohmic cantilever RF MEMS switches, whereas dielectric-charging-induced beam stiction, [ 12 ] as shown in Fig. 5(a), and humidity-induced beam stiction, as shown in Fig. 5(b), pose a reliability issue for capacitive fixed-fixed beam RF MEMS switches. Stiction is the inability of the beam to release after removal of the drive voltage. A high contact pressure assures a low-ohmic contact or alleviates dielectric-charging-induced beam stiction. Commercially available ohmic cantilever RF MEMS switches and capacitive fixed-fixed beam RF MEMS switches have demonstrated lifetimes in excess of 100 billion cycles at 100 mW of RF input power. [ 13 ] [ 14 ] Reliability issues pertaining to high-power operation are discussed in the limiter section.
The prior art includes an RF MEMS frequency-tunable fractal antenna for the 0.1–6 GHz frequency range, [ 18 ] the actual integration of RF MEMS switches on a self-similar Sierpinski gasket antenna to increase its number of resonant frequencies, extending its range to 8 GHz, 14 GHz and 25 GHz, [ 19 ] [ 20 ] an RF MEMS radiation-pattern-reconfigurable spiral antenna for 6 and 10 GHz, [ 21 ] an RF MEMS radiation-pattern-reconfigurable spiral antenna for the 6–7 GHz frequency band based on packaged Radant MEMS SPST-RMSW100 switches, [ 22 ] an RF MEMS multiband Sierpinski fractal antenna, again with integrated RF MEMS switches, functioning at different bands from 2.4 to 18 GHz, [ 23 ] and a 2-bit Ka-band RF MEMS frequency-tunable slot antenna. [ 24 ] The Samsung Omnia W was the first smartphone to include an RF MEMS antenna. [ 25 ] RF bandpass filters can be used to increase out-of-band rejection, in case the antenna fails to provide sufficient selectivity. Out-of-band rejection eases the dynamic range requirement on the LNA and the mixer in the light of interference. Off-chip RF bandpass filters based on lumped bulk acoustic wave (BAW), ceramic, SAW, quartz crystal, and FBAR resonators have superseded distributed RF bandpass filters based on transmission line resonators, printed on substrates with low loss tangent, or based on waveguide cavities. Tunable RF bandpass filters offer a significant size reduction over switched RF bandpass filter banks. They can be implemented using III-V semiconducting varactors, BST or PZT ferroelectrics, RF MEMS resonators and switches, switched capacitors and varactors, and YIG ferrites. RF MEMS resonators offer the potential of on-chip integration of high-Q resonators and low-loss bandpass filters. The Q factor of RF MEMS resonators is on the order of 100–1000. [ 15 ] RF MEMS switch, switched capacitor and varactor technology offers the tunable filter designer a compelling trade-off between insertion loss, linearity, power consumption, power handling, size, and switching time. [ 26 ] Passive subarrays based on RF MEMS phase shifters may be used to lower the number of T/R modules in an active electronically scanned array. The statement is illustrated with examples in Fig. 6: assume a one-by-eight passive subarray is used for transmit as well as receive, with the following characteristics: f = 38 GHz, G r = G t = 10 dBi, BW = 2 GHz, P t = 4 W. The low loss (6.75 ps/dB) and good power handling (500 mW) of the RF MEMS phase shifters allow an EIRP of 40 W and a G r /T of 0.036 1/K. EIRP, also referred to as the power-aperture product, is the product of the transmit gain, G t , and the transmit power, P t . G r /T is the quotient of the receive gain and the antenna noise temperature. A high EIRP and G r /T are a prerequisite for long-range detection. The EIRP and G r /T are a function of the number of antenna elements per subarray and of the maximum scanning angle. The number of antenna elements per subarray should be chosen in order to optimize the EIRP or the EIRP x G r /T product, as shown in Fig. 7 and Fig. 8. The radar range equation can be used to calculate the maximum range for which targets can be detected with 10 dB of SNR at the input of the receiver: $R_{\max} = \sqrt[4]{\dfrac{\mathrm{EIRP}\cdot (G_r/T)\cdot \lambda^2 \sigma}{(4\pi)^3\, k_B \cdot \mathrm{SNR} \cdot \mathrm{BW}}}$, in which $k_B$ is the Boltzmann constant, $\lambda$ is the free-space wavelength, and $\sigma$ is the RCS of the target.
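The following sketch simply restates the radar range equation above in code form and evaluates it with the subarray figures quoted in the text (EIRP = 40 W, G r /T = 0.036 1/K, f = 38 GHz, BW = 2 GHz, SNR = 10 dB):

import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
C = 299_792_458      # speed of light, m/s

def radar_range_m(eirp_w, gr_over_t, freq_hz, rcs_m2, bw_hz, snr_db):
    # Fourth root of EIRP * (Gr/T) * lambda^2 * sigma over
    # (4 pi)^3 * k_B * SNR * BW, per the range equation above.
    lam = C / freq_hz
    snr = 10 ** (snr_db / 10)
    num = eirp_w * gr_over_t * lam**2 * rcs_m2
    den = (4 * math.pi) ** 3 * K_B * snr * bw_hz
    return (num / den) ** 0.25

# Two example targets: a 10 cm sphere (sigma = pi * a^2) and the rear of a
# car (sigma = 20 m^2), as described in the accompanying table discussion.
for name, rcs in (("sphere", math.pi * 0.1**2), ("car", 20.0)):
    print(f"{name}: R = {radar_range_m(40, 0.036, 38e9, rcs, 2e9, 10):.0f} m")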
Range values are tabulated in Table 1 for the following targets: a sphere with a radius, a, of 10 cm (σ = π a²), a dihedral corner reflector with facet size, a, of 10 cm (σ = 12 a⁴/λ²), the rear of a car (σ = 20 m²) and a non-evasive fighter jet (σ = 400 m²). RF MEMS phase shifters enable wide-angle passive electronically scanned arrays, such as lens arrays, reflect arrays, subarrays and switched beamforming networks, with high EIRP and high G r /T. The prior art in passive electronically scanned arrays includes an X-band continuous transverse stub (CTS) array fed by a line source synthesized by sixteen 5-bit reflect-type RF MEMS phase shifters based on ohmic cantilever RF MEMS switches, [ 27 ] [ 28 ] an X-band 2-D lens array consisting of parallel-plate waveguides and featuring 25,000 ohmic cantilever RF MEMS switches, [ 29 ] and a W-band switched beamforming network based on an RF MEMS SP4T switch and a Rotman lens focal plane scanner. [ 30 ] The use of true-time-delay (TTD) phase shifters instead of RF MEMS phase shifters allows UWB radar waveforms with associated high range resolution, and avoids beam squinting or frequency scanning. TTD phase shifters are designed using the switched-line principle [ 8 ] [ 31 ] [ 32 ] or the distributed loaded-line principle. [ 33 ] [ 34 ] [ 35 ] [ 36 ] [ 37 ] [ 38 ] Switched-line TTD phase shifters outperform distributed loaded-line TTD phase shifters in terms of time delay per decibel of NF, especially at frequencies up to X-band, but are inherently digital and require low-loss and high-isolation SPNT switches. Distributed loaded-line TTD phase shifters, however, can be realized in analog or digital form, and in smaller form factors, which is important at the subarray level. Analog phase shifters are biased through a single bias line, whereas multibit digital phase shifters require a parallel bus along with complex routing schemes at the subarray level.
https://en.wikipedia.org/wiki/Radio-frequency_microelectromechanical_system
Radio frequency sweep, frequency sweep, or RF sweep refer to scanning a radio frequency band to detect signals being transmitted there. A radio receiver with an adjustable receiving frequency is used to do this. A display shows the strength of the signals received at each frequency as the receiver's frequency is modified to sweep (scan) the desired frequency band. A spectrum analyzer is a standard instrument used for an RF sweep. It includes an electronically tunable receiver and a display, which presents a two-dimensional plot of measured power (y-axis) versus frequency (x-axis). The power may be shown in either linear units or logarithmic units (dBm). Usually the logarithmic display is more useful, because it presents a larger dynamic range with better detail at each value. An RF sweep relates to a receiver which changes its frequency of operation continuously from a minimum frequency to a maximum (or from maximum to minimum). Usually the sweep is performed at a fixed, controllable rate, for example 5 MHz/s. Some systems use frequency hopping, switching from one frequency of operation to another; one method of CDMA uses frequency hopping. Usually frequency hopping is performed in a random or pseudo-random pattern. Frequency sweeps may be used by regulatory agencies to monitor the radio spectrum, to ensure that users only transmit according to their licenses. The FCC, for example, controls and monitors the use of the spectrum in the U.S. In testing of new electronic devices, a frequency sweep may be done to measure the performance of electronic components or systems. For example, RF oscillators are measured for phase noise, harmonics and spurious signals; computers for consumer sale are tested to avoid radio frequency interference with radio systems. Portable sweep equipment may be used to detect some types of covert listening device (bugs). In professional audio, the optimum use of wireless microphones and wireless intercoms may require performing a sweep of the local radio spectrum, especially if many wireless devices are being used simultaneously. The sweep is generally limited in bandwidth to only the operating bandwidth of the wireless devices. For instance, at American Super Bowl games, audio engineers monitor (sweep) the radio spectrum in real time to make certain that all local wireless microphones are operating at previously agreed-upon and coordinated frequencies. [ 1 ]
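The advantage of the logarithmic display can be seen numerically: dBm compresses a very wide power range into a manageable axis. A small sketch:

import math

def mw_to_dbm(p_mw: float) -> float:
    # dBm is decibels referenced to one milliwatt: 10 * log10(P / 1 mW).
    return 10 * math.log10(p_mw)

# Twelve orders of magnitude of power span only 120 dB on the vertical axis:
for p_mw in (1e-9, 1e-6, 1e-3, 1.0, 1e3):
    print(f"{p_mw:>10.3g} mW = {mw_to_dbm(p_mw):>7.1f} dBm")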
https://en.wikipedia.org/wiki/Radio-frequency_sweep
Radio-paging code No. 1 (usually and hereafter called POCSAG) is an asynchronous protocol used to transmit data to pagers. Its usual designation is an acronym of the Post Office Code Standardisation Advisory Group, the name of the group that developed the code under the chairmanship of the British Post Office, which used to operate most telecommunications in Britain before privatization. Before the development and adoption of the POCSAG code, pagers used one of several codes such as the binary Golay code. In the 1990s new paging codes were developed that offered higher data transmission rates and other advanced features such as European and network roaming. The POCSAG code originally transmitted at 512 bits per second. Faster transmission at 1200 or 2400 bits per second using so-called Super-POCSAG has mostly displaced POCSAG in the developed world, but the transition is still in progress. In 1976 an international group of engineers began to meet to explore the possibility of developing a new code for wide area paging: paging networks covering regions of entire countries. These meetings were successful, and in February 1981 the CCIR (Comité consultatif international pour la radio), the forerunner of the ITU-R, accepted the code as Radiopaging Code No. 1 (RPC No. 1) (Rec. 584). The meetings were chaired by R. H. Tridgell and were attended by representatives of British, European, and Japanese pager manufacturers. [ 1 ] The modulation used is frequency-shift keying (FSK) with a ±4.5 kHz shift on the carrier. The high frequency represents a 0 and the low frequency a 1. [ 2 ] The ±4.5 kHz frequency shift is used along with a 25 kHz channel spacing, known as "wideband". Some jurisdictions require that all systems move to a "narrowband" configuration, using 12.5 kHz channels and ±2.5 kHz frequency shifts (for example, the U.S. Federal Communications Commission (FCC) mandated that this transition be completed before 2013). [ 3 ] Often single transmission channels contain blocks of data at more than one of the rates. Transmission uses 32-bit blocks called codewords. Each codeword carries 21 bits of information (bits 31 through 11), 10 bits of error-correcting code (bits 10 through 1), and an even parity bit (bit 0). Bits 31 through 1 are a binary BCH code (31, 21). The error-correcting code has a Hamming distance of 6: each 31-bit codeword differs from every other codeword in at least 6 bits. Consequently, the code can detect and correct up to 2 errors in a codeword. The generating polynomial g(x) for the BCH (31, 21) code is g(x) = x¹⁰ + x⁹ + x⁸ + x⁶ + x⁵ + x³ + 1. [ 4 ] The codewords are either address or data, which is indicated by the first bit transmitted, bit 31. An address codeword contains 18 bits of address (bits 30 through 13) and 2 function bits (bits 12 and 11). Each data codeword carries 20 bits of data (bits 30 through 11). Codewords are transmitted in batches that consist of a sync codeword, defined in the standard as 0x7CD215D8, followed by 16 payload codewords that are either address or data. Any unused codewords are filled with the idle value of 0x7A89C197. Although the address (also referred to as a RIC, Radio Identity Code, or CAP code, Channel Access Protocol code) [ 5 ] is transmitted as 18 bits, the actual address is 21 bits long: the remaining three bits are derived from which of the 8 pairs of codewords in the batch the address is sent in.
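The codeword layout above translates directly into code. The following minimal sketch assembles an address codeword under the stated structure: the low three bits of the 21-bit RIC select the frame, the upper 18 bits fill bits 30 through 13, the BCH(31,21) check bits come from polynomial division by the generator given above, and an even parity bit completes the 32-bit word. All names are illustrative:

```python
GEN = 0b11101101001  # g(x) = x^10 + x^9 + x^8 + x^6 + x^5 + x^3 + 1

def bch_checkbits(data21):
    """Compute the 10 BCH(31,21) check bits for a 21-bit data word."""
    reg = data21 << 10              # multiply by x^10
    for i in range(30, 9, -1):      # long division over GF(2)
        if reg & (1 << i):
            reg ^= GEN << (i - 10)
    return reg & 0x3FF              # the 10-bit remainder

def parity(word31):
    """Even parity bit over the upper 31 bits of the codeword."""
    return bin(word31).count("1") & 1

def address_codeword(ric, function):
    """Build a 32-bit POCSAG address codeword. The low 3 bits of the
    21-bit RIC select the frame (codeword pair) and are not transmitted;
    the upper 18 bits occupy bits 30..13."""
    addr18 = (ric >> 3) & 0x3FFFF
    data21 = (0 << 20) | (addr18 << 2) | (function & 0x3)  # bit 31 = 0: address
    word31 = (data21 << 10) | bch_checkbits(data21)
    return (word31 << 1) | parity(word31)

frame = 0x12345 & 0x7                       # which of the 8 pairs carries this address
cw = address_codeword(0x12345, function=0)  # the 32-bit word placed in that pair
```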
This frame-assignment strategy allows the receiver to turn off for a considerable percentage of the time, as it only needs to listen to the pair of codewords that applies to it, thus saving a significant amount of battery power. Before a burst of data there will always be a preamble of at least 576 bits containing alternating 1s and 0s, allowing the receiver to synchronize itself to the signal; the preamble is another mechanism that enables the receiver to be turned off for a large percentage of the time. A message will start with an address codeword followed by a number of data codewords and will continue until another address, a sync, or an idle codeword is sent. When the data bits are extracted they will be in one of two formats. There are two message coding formats for the data messages: numeric messages are sent as 4-bit BCD values, and alphanumeric messages are sent as 7-bit ASCII. The 7-bit ASCII is commonly referred to as 'alpha-paging', and 4-bit BCD is commonly referred to as 'numeric-paging'. BCD encoding packs five 4-bit BCD symbols per codeword into bits 30-11. The most significant nibble (bits 30, 29, 28, 27) is the leftmost (or most significant) digit of a BCD-coded numeric datum. Values beyond 9 in each nibble (i.e. 0xA through 0xF) are assigned to special characters; 0xC, for example, represents a space. BCD messages are space-padded with trailing 0xC's to fill the codeword. There is no POCSAG-specified restriction on message length, but particular pagers of course have a fixed number of characters in their display. Alphanumeric messages are encoded in 7-bit ASCII characters packed into the 20-bit data area of a message codeword (bits 30-11). Since three seven-bit characters are 21 rather than 20 bits, and the designers of the standard did not want to waste transmission time, they chose to pack the first 20 bits of an ASCII message into the first codeword, the next 20 bits of the message into the next codeword, and so forth. What this means is that a 7-bit ASCII character that falls on a boundary can and will be split between two codewords, and that the alignment of character boundaries in a particular codeword depends on that codeword's position within the message. The side benefit of this is slightly increased error-correcting reliability for messages that span more than one POCSAG packet. Within a codeword, 7-bit characters are packed from left to right (MSB to LSB). The LSB of an ASCII character is sent first (and is therefore the MSB in the codeword) as per standard ASCII transmission conventions, so viewed as bits inside a codeword the characters are bit-reversed. In the UK, most pager transmissions are in five bands. The frequency 466.075 MHz was previously used by Hutchison Paging, but the network was shut down in 2000; the frequency is still reserved for paging but is not used. In Germany there are several well-known paging transmissions, and licensed paging is possible in any other VHF/UHF bands. In Spain, nationwide service was provided by Telefónica Mensatel, but the network was shut down in 2012. The Swedish pager network marketed as "Minicall" is encoded as POCSAG, and dedicated frequencies are also in use in Switzerland. In Belgium, POCSAG is used for paging over the A.S.T.R.I.D. network. In Italy, the 26.225-26.935 MHz band (AM/FM, odd frequency steps) and 40.0125-40.0875 MHz (in 25 kHz steps) may be used for local pagers. These frequencies are often used for on-site hospital paging systems, including voice paging.
Use of POCSAG on the 26 MHz and 27 MHz bands has been logged by several listeners in Europe, specifically the frequencies 26.350 MHz, 26.500 MHz, 26.705 MHz, 26.725 MHz, 26.755 MHz, 27.005 MHz, 27.007 MHz, and 27.255 MHz (see the note below regarding legal use of 27.255 MHz for paging in the United States). It appears that US-specification paging systems operating on 27.255 MHz have been sold in Italy and other European countries. The former monopoly operator SIP (which later became TIM) also ran a pager service, called Teledrin, on its own set of frequencies. In France, POCSAG is operated by E*Message over the AlphaPage network on a 466 MHz frequency. [ 6 ] In addition to the bands listed above, paging may be authorized in the United States on any frequency in the land mobile bands authorized under Part 90 of the FCC rules, including frequencies in the 72-76 MHz band as well as the usual 30.56-49.58 MHz and 150.775-162.000 MHz VHF bands and the 450-470 MHz band (plus 421-430 or 470-512 MHz in certain cities). In larger metropolitan areas with congested frequency spectrum, paging services will often share the same frequency as land mobile stations, or operate on an adjacent channel. For example, a department store may operate handheld walkie-talkies on 462.7625 MHz while there are high-power pager transmitters on 462.7500 MHz and/or 462.7750 MHz in the same city. Or, a restaurant will use 467.7500 MHz to alert customers when their table is ready (using so-called "coaster pagers") while a department store nearby uses 467.7500 MHz for its in-store communications. In both of these examples, the department store is forced to use a squelch system such as CTCSS or DCS. In many areas in the United States, these frequencies are used for land mobile (two-way) radio communications services in addition to paging. The VHF (152/157-158 MHz) and UHF (454/459 MHz) frequencies are often used for a mixture of paging and land mobile communications. The VHF low band (35/43 MHz) frequencies are mainly used for local hospital paging and in many areas are completely unused. Australia uses a set of dedicated frequencies for localised paging, such as in hospitals, hotels and other facilities, and also as an emergency communication system for fire services (such as the Victorian Country Fire Authority) and for ambulances. Other paging systems for wide-area paging, such as commercial networks, are licensed and operate anywhere in the VHF/UHF bands.
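The alphanumeric packing rules described earlier (7-bit characters, LSB transmitted first, streamed across 20-bit data fields so that characters may straddle codeword boundaries) can be illustrated with a short sketch. The function names and the zero-fill padding of the final field are illustrative assumptions, not part of the standard:

```python
def ascii_bitstream(text):
    """Yield the bits of each 7-bit ASCII character, LSB first,
    in the order POCSAG transmits them."""
    for ch in text:
        code = ord(ch) & 0x7F
        for i in range(7):
            yield (code >> i) & 1

def pack_alpha(text):
    """Pack an ASCII message into 20-bit data fields (bits 30..11 of
    successive data codewords). Characters may straddle field boundaries."""
    fields, current, nbits = [], 0, 0
    for bit in ascii_bitstream(text):
        current = (current << 1) | bit   # first-sent bit lands at the field's MSB
        nbits += 1
        if nbits == 20:
            fields.append(current)
            current, nbits = 0, 0
    if nbits:                            # zero-pad the final partial field
        fields.append(current << (20 - nbits))
    return fields

# "HELLO" is 35 bits, so it spans two 20-bit fields; the character on the
# boundary is split between the two codewords, as the text describes.
print([f"{f:05X}" for f in pack_alpha("HELLO")])
```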
https://en.wikipedia.org/wiki/Radio-paging_code_No._1
The Radio Astronomy Lab (RAL) is an Organized Research Unit (ORU) within the Astronomy Department at the University of California, Berkeley. It was founded by faculty member Harold Weaver in 1958. Until 2012, RAL maintained a radio astronomy observatory at Hat Creek, near Mt. Lassen. [ 1 ] It continues to support on-campus laboratory facilities in Campbell Hall. From 1998 to 2012, the RAL collaborated with the SETI Institute of Mountain View, California, to design, build and operate the Allen Telescope Array (ATA). RAL has been central to the creation of several radio observatories. [ 3 ]
https://en.wikipedia.org/wiki/Radio_Astronomy_Laboratory
Radio Galaxy Zoo (RGZ) is an internet crowdsourced citizen science project that seeks to locate supermassive black holes in distant galaxies. [ 1 ] [ 2 ] It is hosted by the web portal Zooniverse. The scientific team want to identify black hole/jet pairs and associate them with the host galaxies. Using a large number of classifications provided by citizen scientists, they hope to build a more complete picture of black holes at various stages and their origin. [ 3 ] [ 4 ] It was initiated in 2010 by Ray Norris in collaboration with the Zooniverse team, and was driven by the need to cross-identify the millions of extragalactic radio sources that will be discovered by the forthcoming Evolutionary Map of the Universe survey. RGZ is now led by scientists Julie Banfield and Ivy Wong. [ 5 ] RGZ started operations on 17 December 2013, [ 3 ] and ceased collecting new classifications on 1 May 2019. [ 6 ] The project's scientific team are drawn mostly from Australia, with support from Zooniverse developers and other institutions. [ 7 ] They use data taken by the Faint Images of the Radio Sky at Twenty-Centimeters (FIRST) survey, which was observed at the Very Large Array between 1993 and 2011. Also used was data from the Australia Telescope Large Area Survey (ATLAS), taken with the Australia Telescope Compact Array (ATCA) in rural New South Wales. The infrared data used were observed by the Wide-field Infrared Survey Explorer (WISE) and the Spitzer Space Telescope. [ 7 ] As of May 2018, RGZ had published five scientific studies. i) Radio Galaxy Zoo: host galaxies and radio morphologies derived from visual inspection (November 2015). [ 1 ] [ 8 ] The abstract begins: "We present results from the first twelve months of operation of Radio Galaxy Zoo, which upon completion will enable visual inspection of over 170,000 radio sources to determine the host galaxy of the radio emission and the radio morphology." [ 1 ] It then explains that RGZ "uses 1.4GHz radio images from both the Faint Images of the Radio Sky at Twenty Centimeters (FIRST) and the Australia Telescope Large Area Survey (ATLAS) in combination with mid-infrared images at 3.4μm from the Wide-field Infrared Survey Explorer (WISE) and at 3.6μm from the Spitzer Space Telescope." [ 1 ] Its aims are that, when complete, RGZ will measure the relative populations and properties of host galaxies; this might also provide an avenue for finding radio structures that are rare and extreme. [ 1 ] On the International Centre for Radio Astronomy Research (ICRAR) website, an article from September 2015 named "Volunteer black hole hunters as good as the experts" explains how citizen scientists are as good as professionals at RGZ's tasks. [ 9 ] The research team tested trained citizen scientists and ten professional astronomers using a hundred images to help quantify the quality of the data gathered. As the initial results were published, facts and figures from RGZ became available. More than 1.2 million radio images have been looked at, which enabled 60,000 radio sources to be matched to their host galaxies: "A feat that would have taken a single astronomer working 40 hours a week roughly 50 years to complete." [ 9 ] ii) Radio Galaxy Zoo: discovery of a poor cluster through a giant wide-angle tail radio galaxy (May 2016). [ 10 ] [ 11 ] The abstract begins: "We have discovered a previously unreported poor cluster of galaxies (RGZ-CL J0823.2+0333) through an unusual giant wide-angle tail radio galaxy found in the Radio Galaxy Zoo project."
It continues to explain that the analysis of the surrounding environment of 2MASX J08231289+0333016 indicates that it lies within a poor cluster. The radio morphology suggests, firstly, that "the host galaxy is moving at a significant velocity with respect to an ambient medium like that of at least a poor cluster" and, secondly, that "the source may have had two ignition events of the active galactic nucleus with 10^7 yrs in between." [ 10 ] These suggestions reinforce the idea that there is an association between RGZ J082312.9+033301 and the newly discovered poor cluster. [ 10 ] On The Conversation website, in an article "How citizen scientists discovered a giant cluster of galaxies", Ray Norris writes about the above study. [ 5 ] He explains that two Russian citizen scientists (CSs), Ivan Terentev and Tim Matorny, were participating in RGZ when they noticed something odd with one of the radio sources. It became clear that the radio source the two CSs had found "was just one of a line of radio blobs that delineate a C-shaped “wide angle tail galaxy” (WATG)." Lead scientist Julie Banfield explained that this was "something that none of us had even thought would be possible." [ 5 ] WATGs are rare objects that are formed when jets of electrons from black holes, usually seen to be straight, are bent into a C shape by intergalactic gas. This characteristic shape is "a sure sign that there is intergalactic gas, signifying a cluster of galaxies, the largest known objects in the universe." [ 5 ] The WATG discovered by Terentev and Matorny is one of the largest known and has led to the cluster being named after them. "This cluster, more than a billion light years away, contains at least 40 galaxies, marking an intersection of the sheets and filaments of the cosmic web that make up our universe." [ 5 ] Clusters, despite their importance, are hard to find, and the use of WATGs might be a way of finding more; however, WATGs themselves are rare. On the National Radio Astronomy Observatory website, Matorny and Terentev commented on their discovery. “I am still amazed and feel more motivated to look for stunning new radio galaxies,” Matorny said. [ 12 ] Terentev added, “I got a chance to see the whole process of science … and I have been a part of it!” [ 12 ] iii) Radio Galaxy Zoo: A Search for Hybrid Morphology Radio Galaxies (December 2017). [ 13 ] The abstract begins: "Hybrid morphology radio sources are a rare type of radio galaxy that display different Fanaroff-Riley classes on opposite sides of their nuclei." The authors explain that RGZ has enabled them to discover 25 new candidate hybrid morphology radio galaxies (HyMoRS). These HyMoRS lie at redshifts between z = 0.14 and 1.0. Nine of the host galaxies have previous spectra and include quasars and a rare Green bean galaxy. It states: "Although the origin of the hybrid morphology radio galaxies is still unclear, this type of radio source starts depicting itself as a rather diverse class." [ 13 ] The abstract ends: "While high angular resolution follow-up observations are still necessary to confirm our candidates, we demonstrate the efficacy of the Radio Galaxy Zoo in the pre-selection of these sources from all-sky radio surveys, and report the reliability of citizen scientists in identifying and classifying complex radio sources." [ 13 ] In an article on the ARC Centre of Excellence for All-Sky Astrophysics (CAASTRO) website named "Citizen scientists bag a bunch of 'two-faced' galaxies", the author explains the findings of the above study.
[ 14 ] The lead scientist is Anna Kapinska, with CS Ivan Terentev named second. Kapinska's team have been looking for a rare type of galaxy named Hybrid Morphology Radio Galaxies (HyMoRS). These show galaxy characteristics that are combined, rather than distinct. The article states: "Finding more HyMoRS helps us understand what kind of galaxy can turn out this way, and what gives them their unusual properties. Knowing that, in turn, helps us better understand how all galaxies evolve." [ 14 ] The first recognised HyMoRS was discovered in 2002, and about 30 more have been found since; RGZ nearly doubled the discoveries by adding 25 more. Galaxies with black holes that produce jets are often "divided into two classes, Fanaroff-Riley I and Fanaroff-Riley II (or FR I and II). FR I galaxies have jets that fade away as they extend outwards, while FR II galaxies have jets that end in a bright, strongly-emitting region (a ‘hotspot’)." [ 14 ] Proposed explanations include the behaviour of the central black hole, different densities of matter in the surrounding environment, or simply illusions caused by different distances. [ 14 ] iv) Radio Galaxy Zoo: Cosmological Alignment of Radio Sources (November 2017). [ 15 ] In November 2017, a team led by Omar Contigiani published a paper in Monthly Notices of the Royal Astronomical Society studying the mutual alignment of radio sources. [ 15 ] Using data drawn from the Faint Images of the Radio Sky at Twenty Centimeters (FIRST) and the TIFR GMRT Sky Survey (TGSS), they investigate the most powerful radio sources, namely the largest elliptical galaxies emitting plasma-filled jets. The abstract begins: "We study the mutual alignment of radio sources within two surveys, FIRST and TGSS. This is done by producing two position angle catalogues containing the preferential directions of respectively 30059 and 11674 extended sources distributed over more than 7000 and 17000 square degrees." [ 15 ] The FIRST sample sources were identified by participants in RGZ, while the TGSS sample was the result of an automated process. Marginal evidence of local alignment is found in the FIRST sample, with a 2% probability of arising by chance. This supports other recent research by scientists using the Giant Metrewave Radio Telescope. The abstract ends: "The TGSS sample is found to be too sparsely populated to manifest a similar signal." The results suggest that there is a relative alignment present at cosmological distances. [ 15 ] v) Radio Galaxy Zoo: Compact and extended radio source classification with deep learning (May 2018). [ 16 ] In May 2018, Lukic and team published a study in Monthly Notices of the Royal Astronomical Society concerning machine learning techniques. The abstract begins: "Machine learning techniques have been increasingly useful in astronomical applications over the last few years, for example in the morphological classification of galaxies." [ 16 ] During the next two years, up to 105 RGZ objects will be observed with the Hubble Space Telescope (HST) as a result of Program 15445, whose P.I. is William Keel. [ 17 ] [ 18 ] The program's abstract begins: "The classic Galaxy Zoo project and its successors have been rich sources of interesting astrophysics beyond their initial goals. Green Pea starbursts, AGN ionization echoes, dust in backlit spirals, AGN in pseudobulges, have all seen HST followup programs."
[ 17 ] As a result of NASA's 'gap fillers' initiative, it is hoped that significant scientific progress can be made by HST observations of a total of 304 objects, which were chosen by voters using a custom-made Zooniverse interface. [ 17 ] Keel stated: "Each one of them might not be enough for an individual study, but when you put them all together it adds up to an interesting study." [ 18 ] Gems of the Galaxy Zoos finished in September 2023 after imaging 193 of the 300 candidates. Many of the images can be viewed on Wikimedia Commons.
https://en.wikipedia.org/wiki/Radio_Galaxy_Zoo
A Radio Interface Layer (RIL) is a layer in an operating system which provides an interface to the hardware's radio and modem on, for example, a mobile phone. The Android Open Source Project provides a Radio Interface Layer (RIL) between Android telephony services (android.telephony) and the radio hardware. It consists of a stack of two components: a RIL Daemon and a Vendor RIL. The RIL Daemon talks to the telephony services and dispatches "solicited commands" to the Vendor RIL. The Vendor RIL is specific to a particular radio implementation, and dispatches "unsolicited commands" up to the RIL Daemon. [ 1 ] A RIL is a key component of Microsoft's Windows Mobile OS. The RIL enables wireless voice or data applications to communicate with a GSM/GPRS or CDMA2000 1X modem on a Windows Mobile device. The RIL provides the system interface between the CellCore layer within the Windows Mobile OS and the radio protocol stack used by the wireless modem hardware. The RIL, therefore, also allows OEMs to integrate a variety of modems into their equipment by providing this interface. The RIL comprises two separate components: a RIL driver, which processes AT commands and events; and a RIL proxy, which manages requests from the multiple clients to the single RIL driver. Except for PPP connections, all interaction between the Windows Mobile OS and the device radio stack is via the RIL. (PPP connections initially use the RIL to establish the connection, but then bypass the RIL to connect directly to the virtual serial port assigned to the modem.) In essence, the RIL accepts and converts all direct service requests from the upper layers (i.e., TAPI) into commands supported and understood by the modem. The RIL does not, however, communicate directly with the modem; instead, the final link to the modem is typically the standard serial driver provided by the OEM's platform.
https://en.wikipedia.org/wiki/Radio_Interface_Layer
Radio Link Protocol (RLP) is an automatic repeat request (ARQ) fragmentation protocol used over a wireless (typically cellular) air interface. Most wireless air interfaces are tuned to provide 1% packet loss, and most vocoders are mutually tuned to sacrifice very little voice quality at 1% packet loss. However, 1% packet loss is intolerable to all variants of TCP, and so something must be done to improve reliability for voice networks carrying TCP/IP data. An RLP detects packet losses and performs retransmissions to bring packet loss down to 0.01%, or even 0.0001%, which is suitable for TCP/IP applications. RLP also implements stream fragmentation and reassembly, and sometimes, in-order delivery. Newer forms of RLP also provide framing and compression, while older forms of RLP rely upon higher-layer protocols such as PPP to provide these functions. An RLP transport cannot ask the air interface to provide a certain payload size. Instead, the air interface scheduler determines the packet size, based upon constantly changing channel conditions, and upcalls RLP with the chosen packet payload size, right before transmission. Most other fragmentation protocols, such as those of 802.11b and IP, use payload sizes determined by the upper layers, and call upon the MAC to create a payload of a certain size. These other protocols are not as flexible as RLP, and can sometimes fail to transmit during a deep fade in a wireless environment. Because an RLP payload size can be as little as 11 bytes, based upon a CDMA IS-95 network's smallest voice packet size, RLP headers must be very small, to minimize overhead. This is typically achieved by allowing both ends to negotiate a variable 'sequence number space', which is used to number each byte in the transmission stream. In some variants of RLP, this sequence counter can be as small as 6 bits. An RLP can be ACK-based or NAK-based. Most RLPs are NAK-based, meaning that the forward-link sender assumes that each transmission got through, and the receiver only NAKs when an out-of-order segment is received. This greatly reduces reverse-link transmissions, which are spectrally inefficient and have a longer latency on most cellular networks. When the transmit pipeline goes idle, a NAK-based RLP must eventually retransmit the last segment a second time, in case the last fragment was lost, to reach a 0.01% packet loss rate. This duplicate transmission is typically controlled by a "flush timer" set to expire 200-500 milliseconds after the channel goes idle. The concept of an RLP was invented by Phil Karn in 1990 for CDMA (IS-95) networks. The January 2006 IEEE 802.20 specification uses one of the newest forms of RLP. Cellular networks such as GSM and CDMA use different variations of RLP. In UMTS and in LTE, the protocol is called RLC (Radio Link Control).
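The NAK-based behavior described above can be sketched in a few lines. This is a minimal illustration of the receiver-side bookkeeping, not any standardized RLP variant: the class and method names are invented, the sequence space is the small 6-bit kind mentioned in the text, and the flush-timer duplicate transmission is omitted:

```python
class NakReceiver:
    """Receiver side of a NAK-based ARQ: segments are assumed delivered
    unless a gap in the sequence numbers shows otherwise."""

    def __init__(self, seq_bits=6):
        self.mod = 1 << seq_bits     # small negotiated sequence space
        self.expected = 0            # next in-order sequence number
        self.buffer = {}             # out-of-order segments awaiting the gap

    def on_segment(self, seq, payload):
        """Process one arriving segment; return sequence numbers to NAK."""
        naks = []
        if seq == self.expected:
            self.deliver(payload)
            self.expected = (self.expected + 1) % self.mod
            while self.expected in self.buffer:   # drain any buffered run
                self.deliver(self.buffer.pop(self.expected))
                self.expected = (self.expected + 1) % self.mod
        else:
            # Out-of-order arrival: every number between the expected one
            # and this one is presumed lost and is NAKed on the reverse link.
            s = self.expected
            while s != seq:
                naks.append(s)
                s = (s + 1) % self.mod
            self.buffer[seq] = payload
        return naks

    def deliver(self, payload):
        pass  # hand the in-order byte stream to the upper layer
```

Note how the receiver sends nothing at all while segments arrive in order; reverse-link traffic occurs only when a gap appears, which is the spectral-efficiency argument made in the text.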
https://en.wikipedia.org/wiki/Radio_Link_Protocol
Radio astronomy is a subfield of astronomy that studies celestial objects at radio frequencies. The first detection of radio waves from an astronomical object was in 1933, when Karl Jansky at Bell Telephone Laboratories reported radiation coming from the Milky Way. Subsequent observations have identified a number of different sources of radio emission. These include stars and galaxies, as well as entirely new classes of objects, such as radio galaxies, quasars, pulsars, and masers. The discovery of the cosmic microwave background radiation, regarded as evidence for the Big Bang theory, was made through radio astronomy. Radio astronomy is conducted using large radio antennas referred to as radio telescopes, which are either used singly or as multiple linked telescopes utilizing the techniques of radio interferometry and aperture synthesis. The use of interferometry allows radio astronomy to achieve high angular resolution, as the resolving power of an interferometer is set by the distance between its components, rather than the size of its components. Radio astronomy differs from radar astronomy in that the former is a passive observation (i.e., receiving only) and the latter an active one (transmitting and receiving). Before Jansky observed the Milky Way in the 1930s, physicists speculated that radio waves could be observed from astronomical sources. In the 1860s, James Clerk Maxwell's equations had shown that electromagnetic radiation is associated with electricity and magnetism, and could exist at any wavelength. Several attempts were made to detect radio emission from the Sun, including an experiment by German astrophysicists Johannes Wilsing and Julius Scheiner in 1896 and a centimeter wave radiation apparatus set up by Oliver Lodge between 1897 and 1900. These attempts were unable to detect any emission due to technical limitations of the instruments. The discovery of the radio-reflecting ionosphere in 1902 led physicists to conclude that the layer would bounce any astronomical radio transmission back into space, making it undetectable. [ 1 ] Karl Jansky made the discovery of the first astronomical radio source serendipitously in the early 1930s. As a newly hired radio engineer with Bell Telephone Laboratories, he was assigned the task of investigating static that might interfere with short wave transatlantic voice transmissions. Using a large directional antenna, Jansky noticed that his analog pen-and-paper recording system kept recording a persistent repeating signal or "hiss" of unknown origin. Since the signal peaked about every 24 hours, Jansky first suspected the source of the interference was the Sun crossing the view of his directional antenna. Continued analysis, however, showed that the source was not following the 24-hour daily cycle of the Sun exactly, but instead repeating on a cycle of 23 hours and 56 minutes. Jansky discussed the puzzling phenomenon with his friend, astrophysicist Albert Melvin Skellett, who pointed out that the observed time between the signal peaks was the exact length of a sidereal day: the time it took for "fixed" astronomical objects, such as a star, to pass in front of the antenna every time the Earth rotated. [ 2 ] By comparing his observations with optical astronomical maps, Jansky eventually concluded that the radiation source peaked when his antenna was aimed at the densest part of the Milky Way in the constellation of Sagittarius.
[ 3 ] Jansky announced his discovery at a meeting in Washington, D.C., in April 1933, and the field of radio astronomy was born. [ 4 ] In October 1933, his discovery was published in a journal article entitled "Electrical disturbances apparently of extraterrestrial origin" in the Proceedings of the Institute of Radio Engineers. [ 5 ] Jansky concluded that since the Sun (and therefore other stars) were not large emitters of radio noise, the strange radio interference may be generated by interstellar gas and dust in the galaxy, in particular, by "thermal agitation of charged particles." [ 2 ] [ 6 ] (Jansky's peak radio source, one of the brightest in the sky, was designated Sagittarius A in the 1950s and was later hypothesized to be emitted by electrons in a strong magnetic field. Current thinking is that these are ions in orbit around a massive black hole at the center of the galaxy at a point now designated as Sagittarius A*. The asterisk indicates that the particles at Sagittarius A are ionized.) [ 7 ] [ 8 ] [ 9 ] [ 10 ] After 1935, Jansky wanted to investigate the radio waves from the Milky Way in further detail, but Bell Labs reassigned him to another project, so he did no further work in the field of astronomy. His pioneering efforts in the field of radio astronomy have been recognized by the naming of the fundamental unit of flux density, the jansky (Jy), after him. [ 11 ] Grote Reber was inspired by Jansky's work, and built a parabolic radio telescope 9 m in diameter in his backyard in 1937. He began by repeating Jansky's observations, and then conducted the first sky survey at radio frequencies. [ 12 ] On February 27, 1942, James Stanley Hey, a British Army research officer, made the first detection of radio waves emitted by the Sun. [ 13 ] Later that year, George Clark Southworth, [ 14 ] at Bell Labs like Jansky, also detected radio waves from the Sun. Both researchers were bound by wartime security surrounding radar, so Reber, who was not, published his 1944 findings first. [ 15 ] Several other people independently discovered solar radio waves, including E. Schott in Denmark [ 16 ] and Elizabeth Alexander working on Norfolk Island. [ 17 ] [ 18 ] [ 19 ] [ 20 ] At Cambridge University, where ionospheric research had taken place during World War II, J. A. Ratcliffe, along with other members of the Telecommunications Research Establishment that had carried out wartime research into radar, created a radiophysics group at the university where radio wave emissions from the Sun were observed and studied. This early research soon branched out into the observation of other celestial radio sources, and interferometry techniques were pioneered to isolate the angular source of the detected emissions. Martin Ryle and Antony Hewish at the Cavendish Astrophysics Group developed the technique of Earth-rotation aperture synthesis. The radio astronomy group in Cambridge went on to found the Mullard Radio Astronomy Observatory near Cambridge in the 1950s. During the late 1960s and early 1970s, as computers (such as the Titan) became capable of handling the computationally intensive Fourier transform inversions required, they used aperture synthesis to create a 'One-Mile' and later a '5 km' effective aperture using the One-Mile and Ryle telescopes, respectively. They used the Cambridge Interferometer to map the radio sky, producing the Second (2C) and Third (3C) Cambridge Catalogues of Radio Sources. [ 21 ] Radio astronomers use different techniques to observe objects in the radio spectrum.
Instruments may simply be pointed at an energetic radio source to analyze its emission. To "image" a region of the sky in more detail, multiple overlapping scans can be recorded and pieced together in a mosaic image. The type of instrument used depends on the strength of the signal and the amount of detail needed. Observations from the Earth's surface are limited to wavelengths that can pass through the atmosphere. At low frequencies or long wavelengths, transmission is limited by the ionosphere, which reflects waves with frequencies less than its characteristic plasma frequency. Water vapor interferes with radio astronomy at higher frequencies, which has led to the building of radio observatories that conduct observations at millimeter wavelengths at very high and dry sites, to minimize the water vapor content in the line of sight. Finally, transmitting devices on Earth may cause radio-frequency interference; because of this, many radio observatories are built at remote places. Radio telescopes may need to be extremely large in order to receive signals with low signal-to-noise ratio. Also, since angular resolution is a function of the diameter of the "objective" in proportion to the wavelength of the electromagnetic radiation being observed, radio telescopes have to be much larger in comparison to their optical counterparts. For example, a 1-meter diameter optical telescope is two million times bigger than the wavelength of light observed, giving it a resolution of roughly 0.3 arc seconds, whereas a radio telescope "dish" many times that size may, depending on the wavelength observed, only be able to resolve an object the size of the full moon (30 minutes of arc). The difficulty of achieving high resolutions with single radio telescopes led to radio interferometry, developed in 1946 by British radio astronomer Martin Ryle and by Australian engineer, radiophysicist, and radio astronomer Joseph Lade Pawsey together with Ruby Payne-Scott. The first use of a radio interferometer for an astronomical observation was carried out by Payne-Scott, Pawsey and Lindsay McCready on 26 January 1946, using a single converted radar antenna (broadside array) at 200 MHz near Sydney, Australia. This group used the principle of a sea-cliff interferometer, in which the antenna (formerly a World War II radar) observed the Sun at sunrise, with interference arising from the direct radiation from the Sun and the reflected radiation from the sea. With this baseline of almost 200 meters, the authors determined that the solar radiation during the burst phase was much smaller than the solar disk and arose from a region associated with a large sunspot group. The Australian group laid out the principles of aperture synthesis in a groundbreaking paper published in 1947. The use of a sea-cliff interferometer had been demonstrated by numerous groups in Australia, Iran and the UK during World War II, who had observed interference fringes (the direct radar return radiation and the reflected signal from the sea) from incoming aircraft. The Cambridge group of Ryle and Vonberg observed the Sun at 175 MHz for the first time in mid-July 1946, with a Michelson interferometer consisting of two radio antennas with spacings of some tens of meters up to 240 meters. They showed that the radio radiation was smaller than 10 arc minutes in size, and also detected circular polarization in the Type I bursts.
Two other groups had also detected circular polarization at about the same time (David Martyn in Australia and Edward Appleton with James Stanley Hey in the UK). Modern radio interferometers consist of widely separated radio telescopes observing the same object that are connected together using coaxial cable, waveguide, optical fiber, or another type of transmission line. This not only increases the total signal collected, but it can also be used in a process called aperture synthesis to vastly increase resolution. This technique works by superposing ("interfering") the signal waves from the different telescopes on the principle that waves that coincide with the same phase will add to each other, while two waves that have opposite phases will cancel each other out. This creates a combined telescope that is the size of the antennas furthest apart in the array. To produce a high-quality image, a large number of different separations between different telescopes are required (the projected separation between any two telescopes as seen from the radio source is called a "baseline") – as many different baselines as possible are required in order to get a good quality image. For example, the Very Large Array has 27 telescopes giving 351 independent baselines at once. Beginning in the 1970s, improvements in the stability of radio telescope receivers permitted telescopes from all over the world (and even in Earth orbit) to be combined to perform very-long-baseline interferometry. Instead of physically connecting the antennas, data received at each antenna is paired with timing information, usually from a local atomic clock, and then stored for later analysis on magnetic tape or hard disk. At that later time, the data is correlated with data from other antennas similarly recorded, to produce the resulting image. Using this method, it is possible to synthesise an antenna that is effectively the size of the Earth. The large distances between the telescopes enable very high angular resolutions to be achieved, much greater in fact than in any other field of astronomy. At the highest frequencies, synthesised beams less than 1 milliarcsecond are possible. The pre-eminent VLBI arrays operating today are the Very Long Baseline Array (with telescopes located across North America) and the European VLBI Network (telescopes in Europe, China, South Africa and Puerto Rico). Each array usually operates separately, but occasional projects are observed together, producing increased sensitivity; this is referred to as Global VLBI. There is also a VLBI network operating in Australia and New Zealand called the LBA (Long Baseline Array), [ 22 ] and arrays in Japan, China and South Korea observe together to form the East-Asian VLBI Network (EAVN). [ 23 ] Since its inception, recording data onto hard media was the only way to bring the data recorded at each telescope together for later correlation. However, the availability today of worldwide, high-bandwidth networks makes it possible to do VLBI in real time. This technique (referred to as e-VLBI) was originally pioneered in Japan, and more recently adopted in Australia and in Europe by the EVN (European VLBI Network), which performs an increasing number of scientific e-VLBI projects per year. [ 24 ] Radio astronomy has led to substantial increases in astronomical knowledge, particularly with the discovery of several classes of new objects, including pulsars, quasars [ 25 ] and radio galaxies.
This is because radio astronomy allows us to see things that are not detectable in optical astronomy. Such objects represent some of the most extreme and energetic physical processes in the universe. The cosmic microwave background radiation was also first detected using radio telescopes. However, radio telescopes have also been used to investigate objects much closer to home, including observations of the Sun and solar activity, and radar mapping of the planets. Other sources have been studied as well: Earth's own radio signal, which is mostly natural and stronger than, for example, Jupiter's, is produced by Earth's auroras and bounces off the ionosphere back into space. [ 27 ] Radio astronomy service (also: radio astronomy radiocommunication service) is, according to Article 1.58 of the International Telecommunication Union's (ITU) Radio Regulations (RR), [ 28 ] defined as "A radiocommunication service involving the use of radio astronomy". The subject of this radiocommunication service is the reception of radio waves transmitted by astronomical or celestial objects. The allocation of radio frequencies is provided according to Article 5 of the ITU Radio Regulations (edition 2012). [ 29 ] To improve harmonisation in spectrum utilisation, the majority of service allocations stipulated in this document were incorporated in national Tables of Frequency Allocations and Utilisations, which is within the responsibility of the appropriate national administration. The allocation might be primary, secondary, exclusive, or shared. Depending on the ITU Region, a number of frequency bands are allocated (on a primary or secondary basis) to the radio astronomy service, in many cases shared with mobile-satellite, aeronautical and radiodetermination services.
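Two figures quoted earlier in this article, the diffraction-limited resolution comparison between optical and radio telescopes and the Very Large Array's 351 baselines, can be checked with a short calculation. This minimal sketch uses the common Rayleigh criterion θ ≈ 1.22 λ/D; the example wavelengths and dish size are illustrative choices, not values from the text:

```python
import math

ARCSEC_PER_RAD = 180 / math.pi * 3600

def resolution_arcsec(wavelength_m, diameter_m):
    """Diffraction-limited angular resolution: theta ~ 1.22 * lambda / D."""
    return 1.22 * wavelength_m / diameter_m * ARCSEC_PER_RAD

def n_baselines(n_antennas):
    """Number of distinct antenna pairs (baselines) in an N-element array."""
    return n_antennas * (n_antennas - 1) // 2

# A 1 m optical telescope at 500 nm resolves a fraction of an arcsecond...
print(resolution_arcsec(500e-9, 1.0))       # ~0.13 arcsec
# ...while a 25 m dish at the 21 cm hydrogen line resolves about half a
# degree, roughly the angular size of the full moon.
print(resolution_arcsec(0.21, 25.0) / 60)   # ~35 arcmin
print(n_baselines(27))                      # 351, as quoted for the VLA
```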
https://en.wikipedia.org/wiki/Radio_astronomy
Radio channel emulators or radio channel simulators (also called fading simulators) are tools for air interface testing in wireless communication. In a test environment, radio channel emulators replace the real-world radio channel between a radio transmitter and a receiver by providing a faded representation of a transmitted signal to the receiver inputs. As technology moves forward to take advantage of more complex channel characteristics such as MIMO, the channel modeling needed to accurately emulate the radio environment becomes even more critical to a test setup. Radio channel emulators enable the creation of mathematical models representing the physical radio signal transmission medium. [ 1 ] The complex nature of a MIMO system creates unique measurement challenges in providing a test environment that fully simulates a real-world wireless channel. The radio channel emulator must have channel models that accurately simulate multiple antenna performance, including correlation between antenna elements.
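The core operation such an emulator performs, replacing the ideal channel with a faded one, can be sketched for the simplest case of flat Rayleigh fading on a single antenna pair. This is a minimal sum-of-sinusoids illustration, not the channel model of any particular product; the path count, Doppler spread, sample rate, and names are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def rayleigh_fade(tx, n_paths=8, doppler_hz=30.0, fs_hz=1e4):
    """Apply flat Rayleigh fading to a complex baseband signal by summing
    a few scattered paths with random phases and Doppler shifts
    (a sum-of-sinusoids channel model)."""
    t = np.arange(len(tx)) / fs_hz
    gain = np.zeros(len(tx), dtype=complex)
    for _ in range(n_paths):
        phase = rng.uniform(0, 2 * np.pi)
        doppler = doppler_hz * np.cos(rng.uniform(0, 2 * np.pi))
        gain += np.exp(1j * (2 * np.pi * doppler * t + phase))
    gain /= np.sqrt(n_paths)          # normalize average channel power to ~1
    return gain * tx

tx = np.ones(1000, dtype=complex)     # a constant carrier at baseband
rx = rayleigh_fade(tx)                # the envelope now fades over time
```

A MIMO emulator of the kind discussed above would apply one such (correlated) fading process per transmit-receive antenna pair rather than a single scalar gain.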
https://en.wikipedia.org/wiki/Radio_channel_emulator
Radio-frequency (RF) engineering is a subset of electrical engineering involving the application of transmission line, waveguide, antenna, radar, and electromagnetic field principles to the design and application of devices that produce or use signals within the radio band, the frequency range of about 20 kHz up to 300 GHz. [ 1 ] [ 2 ] [ 3 ] It is incorporated into almost everything that transmits or receives a radio wave, which includes, but is not limited to, mobile phones, radios, Wi-Fi, and two-way radios. RF engineering is a highly specialized field covering several areas of expertise. To produce quality results, the RF engineer needs to have an in-depth knowledge of mathematics, physics and general electronics theory, as well as specialized training in areas such as wave propagation, impedance transformations, filters and microstrip printed circuit board design. [ citation needed ] Radio electronics is concerned with electronic circuits which receive or transmit radio signals. Typically, such circuits must operate at radio frequency and power levels, which imposes special constraints on their design. These constraints increase in importance with higher frequencies. At microwave frequencies, the reactance of signal traces becomes a crucial part of the physical layout of the circuit. Radio-frequency engineers are specialists in their respective field and can take on many different roles, such as design, installation, and maintenance. Radio-frequency engineers require many years of extensive experience in the area of study. This type of engineer has experience with transmission systems, device design, and placement of antennas for optimum performance. The RF engineer job description at a broadcast facility can include maintenance of the station's high-power broadcast transmitters and associated systems. This includes transmitter site emergency power, remote control, main transmission line and antenna adjustments, microwave radio relay STL/TSL links, and more. In addition, a radio-frequency design engineer must be able to understand electronic hardware design, circuit board materials, antenna radiation, and the effect of interfering frequencies that prevent optimum performance within the piece of equipment being developed. There are many applications of electromagnetic theory to radio-frequency engineering, using conceptual tools such as vector calculus and complex analysis. [ 5 ] [ 6 ] Topics studied in this area include waveguides and transmission lines, the behavior of radio antennas, and the propagation of radio waves through the Earth's atmosphere. Historically, the subject played a significant role in the development of nonlinear dynamics. [ 7 ]
https://en.wikipedia.org/wiki/Radio_electronics
Radio fingerprinting is a process that identifies a cellular phone or any other radio transmitter by the fingerprint that characterizes its signal transmission and is hard to imitate. An electronic fingerprint makes it possible to identify a wireless device by its radio transmission characteristics. Radio fingerprinting is commonly used by cellular operators to prevent cloning of cell phones — a cloned device will have the same numeric equipment identity but a different radio fingerprint. Essentially, each transmitter (cell phones are just one type of radio transmitter) has a rise-time signature when first keyed, caused by the slight variations of component values during manufacture. Once the rise-time signature is captured and assigned to a callsign, the use of a different transmitter using the same callsign is easily detected. Such systems are used in military signals intelligence and by radio regulatory agencies such as the U.S. Federal Communications Commission (FCC) for identifying illegal transmitters. They are also used for assessing usage for billing purposes in Specialized Mobile Radio (SMR) systems. This topic has garnered great attention in recent years, as the radio fingerprinting technique offers a "physical layer" authentication solution, which can provide fundamentally superior performance compared with traditional higher-layer encryption solutions. The topic has been studied by various researchers across multiple disciplines, including signal processing, antennas and propagation, and computer science. [ 1 ] [ 2 ] [ 3 ]
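One such transmission characteristic, the 10%-90% rise time of the key-up amplitude envelope, can be sketched as follows. The exponential transient, the time constants, and all names are illustrative assumptions; a real fingerprinting system would combine many more features:

```python
import numpy as np

def rise_time(envelope, fs_hz):
    """10%-90% rise time of a transmitter's key-up amplitude envelope,
    one simple feature a radio fingerprinting system might use."""
    peak = envelope.max()
    t10 = np.argmax(envelope >= 0.1 * peak)   # first sample above 10% of peak
    t90 = np.argmax(envelope >= 0.9 * peak)   # first sample above 90% of peak
    return (t90 - t10) / fs_hz

# Toy transient: an exponential ramp whose time constant varies slightly
# from unit to unit because of component tolerances.
fs = 1e6
t = np.arange(2000) / fs
for tau in (20e-6, 23e-6):                    # two "different" transmitters
    env = 1 - np.exp(-t / tau)
    print(f"tau = {tau*1e6:.0f} us -> rise time {rise_time(env, fs)*1e6:.1f} us")
```

Even this toy example separates the two units cleanly, which is the essence of the cloning-detection use case described above.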
https://en.wikipedia.org/wiki/Radio_fingerprinting
The radio hat was a portable radio built into a pith helmet that would bring in stations within a 20-mile (32 km) radius. It was introduced in early 1949 for $7.95 as the "Man-from-Mars Radio Hat." [ 1 ] Thanks to a successful publicity campaign, the radio hat was sold at stores from coast to coast in the United States. The radio hat was manufactured by the American Merri-Lei Corporation of Brooklyn, N.Y. The company was a leading supplier of party hats, noise makers and other novelty items. Its founder, Victor Hoeflich, had invented a machine to make paper Hawaiian leis while still in high school (1914), and by 1949 the company shipped millions of leis to Hawaii each year. An inventor and gadgeteer, [ 2 ] [ 3 ] [ 4 ] Hoeflich continued to develop and even sell machinery that manufactured paper novelty items. [ 5 ] [ 6 ] Battery-operated portable radios had been available for many years, but Hoeflich hoped a radio with innovative packaging and a publicity campaign could be a runaway success. The transistor had just been invented, but was still an expensive laboratory curiosity; the first pocket transistor radio was still 5 years away. This radio would have to use the existing vacuum tube technology, and the tubes would be a prominent design feature. The loop antenna and the tuning knob were also visible. The hat was available in eight colors: Lipstick Red, Tangerine, Flamingo, Canary Yellow, Chartreuse, Blush Pink, Rose Pink and Tan. [ 7 ] [ 8 ] In March 1949, Victor Hoeflich held a press conference to introduce the "Man from Mars, Radio Hat". Hoeflich knew a picture would tell the story, so he had several teenagers modeling the radio hats for the reporters and photographers. Soon pictures and news stories appeared in newspapers coast to coast. [ 9 ] [ 10 ] The articles typically included a photo of a young lady wearing the hat and a six-paragraph story. The radio hat also received widespread coverage in magazines. This included do-it-yourself magazines such as Popular Mechanics, [ 11 ] Popular Science, [ 12 ] Mechanix Illustrated, [ 13 ] and Radio-Electronics. There was also coverage in general-audience magazines such as Life, Time, [ 14 ] Newsweek, and The New Yorker. [ 6 ] The radio hat was sold in department stores and by mail order. [ 1 ] A Van Nuys, California, service station chain sold the hats as a promotional item to customers who purchased gasoline. [ 15 ] The massive publicity did not lead to lasting sales. Advertisements for the radio hat stopped in early 1950. In a 1956 interview, Hoeflich said the company still got orders for the hat even though it was long out of production. [ 5 ] Hugo Gernsback, the editor of Radio-Electronics, was impressed with the radio hat, and the June 1949 issue had a two-page article describing the circuitry and construction of the radio. The cover photograph shows a 15-year-old Hope Lange wearing a Lipstick Red hat. [ 8 ] She went on to become an award-winning stage, film, and television actress. She was nominated for the 1957 Academy Award for Best Supporting Actress for her role as Selena Cross in the film Peyton Place. [ 16 ] [ 17 ] Radios at this time usually were powered by the AC mains. They used vacuum tubes that had a 6 or 12 volt filament supply that heated the cathode, and a 100 to 300 volt anode (or B+) supply. The technological advances in World War II for mobile radios produced inexpensive low-power vacuum tubes. The radio hat had an internal battery pack that provided 1.5 volts for the filaments and the 22.5 volt B+ supply.
These were much safer voltages for use in a hat, especially since the full plate voltage was dropped across the earphone. This technique was commonly used in many simple radios, some having ninety or more volts present across the headphones or earphones. The battery pack would power the radio for up to 20 hours. The radio received the AM broadcast band (540 kHz to 1600 kHz) and was tuned by a knob between the two tubes. (Table-top or console radio receivers of the day used 5 or 6 tubes to provide better performance.) The 1S5 tube functioned as a regenerative detector. Audio detected by the 1S5 was resistance-coupled to the 3V4, where it was amplified and supplied to the earphone. The detector was provided with a cathode feedback level well into the oscillation range by the 330 pF capacitor. The received carrier blocked the oscillations, allowing strong local stations to be received clearly. In addition, the loop antenna was part of the resonant tuning circuit, resulting in near-unity coupling between the antenna and the detector, which helped provide a high enough level of carrier for the blocking function. A regenerative detector operated in this mode is sometimes called a superregenerative detector, but in this circuit there was no separate quenching oscillator. The blocking signal was ideally at the same frequency as the oscillation, as opposed to the usually lower frequency employed in a true superregenerative detector. The regenerative detector in the radio hat had adequate sensitivity to receive stations much more distant than the stipulated twenty-mile range, but distant stations would not have had a strong enough carrier to block the oscillations and so would be received with an objectionable heterodyne, audible as an astable squealing noise. Furthermore, the loop antenna was somewhat directional. This was a limitation for a portable radio; the signal level could vary when the listener turned their head. If the target station was accidentally nulled, the carrier signal could fall below the blocking level, resulting in an annoying squealing heterodyne similar to that present on stations outside the normal range of the radio.
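The tuning arrangement described above, with the loop antenna serving as the inductor of the detector's resonant circuit, comes down to the LC resonance formula f = 1/(2π√(LC)). A minimal sketch of the arithmetic follows; the 200 µH loop inductance and the 365 pF capacitor (a typical broadcast-band variable capacitor of the era) are illustrative assumptions, not measured values from the radio hat:

```python
import math

def resonant_freq(l_henry, c_farad):
    """Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1 / (2 * math.pi * math.sqrt(l_henry * c_farad))

def cap_for(f_hz, l_henry):
    """Capacitance needed to resonate inductance L at frequency f."""
    return 1 / ((2 * math.pi * f_hz) ** 2 * l_henry)

L = 200e-6  # illustrative loop-antenna inductance, 200 uH
for f in (540e3, 1600e3):                   # edges of the AM broadcast band
    print(f"{f/1e3:.0f} kHz needs C = {cap_for(f, L)*1e12:.0f} pF")
print(f"{resonant_freq(L, 365e-12)/1e3:.0f} kHz")  # ~589 kHz with cap fully meshed
```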
https://en.wikipedia.org/wiki/Radio_hat
A radio latino is a measuring instrument used in surveying and military engineering starting in the 16th century. It gets its name from its inventor, Latino Orsini. The radio latino can be considered a kind of geometric square. [ 1 ] It was a general-purpose instrument that could be used for a variety of angular measurements as well as depth and inside dimension measures. The slider (blue in the adjacent diagram) could move along the central rod, causing the deltoid formed by four other rods to change shape symmetrically. The end points of the rods had sights on them, allowing various sight lines to be defined. The central rod was graduated with various scales. These scales allowed the angles between the end rods (represented by the red lines in the diagram) to be determined, as well as the angle with its vertex at one end of the main rod and sides (represented by the green lines in the diagram) through the outer joints of the rods. With different graduations, one could determine or lay out a variety of angles and distances. When folded, the radio latino would resemble a sword, and it was stored in a sheath or scabbard. The radio latino was usually constructed of brass. The central, main rod was graduated with multiple scales. The free end of the main rod had a handle attached; within the handle, a small compass was mounted. The two end-most side rods were shorter than the two attached to the slider. This permitted the end rods to be set to any angle up to 180°. The slider could move along the main rod and was used as an index for reading the engraved scales. Each hinged vertex had a sighting vane. This permitted the instrument to be used to measure or lay out angles or other dimensions visually.
https://en.wikipedia.org/wiki/Radio_latino
Radio maps, [ 1 ] [ 2 ] [ 3 ] [ 4 ] also known as radio environment maps, [ 5 ] describe how radio waves spread across a geographical region. The main types of radio maps are signal strength maps and propagation maps. Signal strength maps provide a metric that quantifies the received power at each location. In turn, propagation maps characterize the propagation channel between arbitrary pairs of locations. Radio maps can be used in a large number of applications, especially in the context of wireless communications. For instance, network operators can use radio maps to determine where to deploy new base stations or how to allocate frequencies. Signal strength maps quantify signal strength at each location. Formally, a signal strength map can be seen as a function γ(r) that provides a signal strength metric for each location r. Here, r is a vector that contains the spatial coordinates of the location of interest. Oftentimes, a signal strength map is represented by a matrix or tensor Γ that collects the values of γ(r) on a set of points r that form a regular grid. The types of signal strength maps, presented below, are determined by the signal strength metric that they provide. [ 6 ] In coverage maps, γ(r) takes a binary value that indicates whether the received signal strength meets a certain quality objective. For example, in the case of digitally modulated signals, such a quality objective can be a maximum admissible bit error rate. Coverage maps are mainly used by operators to visualize the areas in which a certain service is successfully provided. The positions and sizes of regions with poor coverage can inform the operators on locations where new base stations can be deployed. In outage probability maps, γ(r) is the outage probability at location r. Therefore, this kind of map provides richer information than coverage maps, since it may indicate the fraction of the time in which the signal strength meets the desired objective. Outages may occur, for example, due to small-scale fading, due to moving obstacles in the signal propagation paths, or due to excessive interference. In power maps, γ(r) is the received signal strength at r. This information is more detailed than the information provided by coverage or outage probability maps, which just indicate whether the signal strength is below or above a certain threshold. This is important because, depending on the signal strength, a certain radiocommunication link may adopt a different modulation and coding. This is the case, for example, of cellular communications. Power spectral density (PSD) maps return the PSD at each location. Therefore, they are functions of the form γ(r; f), where f is the frequency variable. They constitute the most detailed form of radio maps, as they provide the distribution of signal power not only across space but also across the frequency domain. PSD maps may be used e.g. by network operators to determine which frequency bands contain most interference. Propagation maps characterize signal propagation between arbitrary pairs of locations.
For this reason, a propagation radio map is a function γ(r₁, r₂) of two locations r₁ and r₂. In the case of channel-gain maps, γ(r₁, r₂) is the gain of the channel when the transmitter is at r₁ and the receiver at r₂ (or vice versa). A typical approach to constructing a radio map is via ray-tracing software. These programs use a 3D model of the region of interest to predict how the waves radiated by a certain transmitter propagate to every location. A more traditional approach is to use a radio propagation model; some of these models are based on electromagnetic propagation theory, whereas others are empirical. Radio map estimation (RME) comprises a collection of techniques used to estimate a radio map from measurements taken across the area of interest. These measurements may be collected by dedicated sensors or simply by communication terminals, which can also act as sensors. In many practical scenarios, RME may be more convenient than simulation approaches such as ray tracing, since the latter require detailed 3D models of the propagation scenario, which are seldom available in practice. The most common algorithms for RME are Kriging, kernel methods, and deep learning.
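As an illustration of the kernel-based branch of RME, the following Python sketch fits a Gaussian-kernel ridge regression to scattered signal-strength measurements and evaluates the estimated map γ(r) on a regular grid. The transmitter location, the log-distance path-loss model used to synthesize measurements, and the hyperparameters are invented for illustration and are not drawn from the cited literature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic measurements: sensors scattered over a 100 m x 100 m area
# report received power (dBm) that decays log-linearly with distance
# from a hypothetical transmitter at (50, 50), plus shadowing noise.
tx = np.array([50.0, 50.0])
locs = rng.uniform(0.0, 100.0, size=(60, 2))            # sensor positions
d = np.linalg.norm(locs - tx, axis=1) + 1.0             # avoid log(0)
power = -40.0 - 35.0 * np.log10(d) + rng.normal(0, 2, 60)

def gaussian_kernel(a, b, length_scale=15.0):
    """Gaussian (RBF) kernel matrix between two sets of 2-D locations."""
    sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * length_scale ** 2))

# Kernel ridge regression: solve (K + lam*I) alpha = y.
K = gaussian_kernel(locs, locs)
lam = 1e-2
alpha = np.linalg.solve(K + lam * np.eye(len(locs)), power)

# Evaluate the estimated map gamma(r) on a regular grid, as in the text.
xs = np.linspace(0, 100, 50)
grid = np.array([[x, y] for y in xs for x in xs])
gamma = gaussian_kernel(grid, locs) @ alpha              # predicted dBm
print(gamma.reshape(50, 50).round(1))
```

The same fitted coefficients can be reused to evaluate the map at any new location, which is the practical appeal of kernel methods over re-running a full ray-tracing simulation.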
https://en.wikipedia.org/wiki/Radio_map
Radio modems are modems that transfer data wirelessly across ranges of up to tens of kilometres. Using radio modems is a modern way to create private radio networks (PRNs). Private radio networks are used in critical industrial applications where real-time data communication is needed. Radio modems let users operate independently of telecommunication or satellite network operators. In most cases, users operate on licensed frequencies in either the UHF or VHF bands. In certain areas, licensed frequencies may be reserved for a given user, ensuring less likelihood of radio interference from other RF transmitters. Licence-free frequencies are also available in most countries, enabling easy implementation, but other users may share the same frequency, so a given channel may at times be blocked. Typical applications for radio modems include land-survey differential GPS, fleet management, SCADA (utility distribution networks), automated meter reading (AMR), telemetry, and many more. Since applications usually require highly reliable data transfer and very high uptime, radio performance plays a key role. Factors influencing radio performance include antenna height and type, the sensitivity of the radio, the output power of the radio, and the overall system design.
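The interplay of those factors can be made concrete with a simple link-budget calculation. The sketch below is a minimal Python example; the transmitter power, antenna gains, frequency, distance, and sensitivity figure are all assumed values, and free-space path loss stands in for a real terrain model.

```python
import math

# Assumed link parameters (illustrative, not vendor specifications).
tx_power_dbm = 10 * math.log10(5.0 / 1e-3)   # 5 W UHF transmitter -> ~37 dBm
tx_gain_dbi = 6.0                            # elevated base antenna
rx_gain_dbi = 3.0                            # mobile whip antenna
freq_mhz, dist_km = 450.0, 20.0

# Free-space path loss (FSPL) in dB; real terrain adds further loss.
fspl_db = 20 * math.log10(dist_km) + 20 * math.log10(freq_mhz) + 32.44

rx_power_dbm = tx_power_dbm + tx_gain_dbi + rx_gain_dbi - fspl_db
sensitivity_dbm = -110.0                     # assumed receiver sensitivity
margin_db = rx_power_dbm - sensitivity_dbm
print(f"received {rx_power_dbm:.1f} dBm, fade margin {margin_db:.1f} dB")
```

A positive margin of several tens of decibels is what makes the "very high uptime" demanded by SCADA and telemetry applications achievable in practice.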
https://en.wikipedia.org/wiki/Radio_modem
Radio navigation or radionavigation is the application of radio waves to determine the position of an object on the Earth, whether the vessel itself or an obstruction. [1] [2] Like radiolocation, it is a type of radiodetermination. The basic principles are measurements from/to electric beacons, especially of directions (bearings) and of distances (ranges); combinations of these measurement principles also are important; for example, many radars measure both the range and the azimuth of a target. [citation needed] The earliest systems used some form of directional radio antenna to determine the location of a broadcast station on the ground. Conventional navigation techniques are then used to take a radio fix. These were introduced prior to World War I and remain in use today. [citation needed] The first system of radio navigation was the Radio Direction Finder, or RDF. [3] By tuning in a radio station and then using a directional antenna, one could determine the direction to the broadcasting antenna. A second measurement using another station was then taken. Using triangulation, the two directions can be plotted on a map, where their intersection reveals the location of the navigator (a numerical sketch of such a fix is given below). [4] [5] Commercial AM radio stations can be used for this task because of their long range and high power, but strings of low-power radio beacons were also set up specifically for this purpose, especially near airports and harbours. [citation needed] Early RDF systems normally used a loop antenna, a small loop of metal wire mounted so that it can be rotated around a vertical axis. [3] At most angles the loop has a fairly flat reception pattern, but when it is aligned perpendicular to the station, the signal received on one side of the loop cancels the signal in the other, producing a sharp drop in reception known as the "null". By rotating the loop and looking for the angle of the null, the relative bearing of the station can be determined. Loop antennas can be seen on most pre-1950s aircraft and ships. [citation needed] The main problem with RDF is that it required a special antenna on the vehicle, which may not be easy to mount on smaller vehicles or single-crew aircraft. A smaller problem is that the accuracy of the system depends to a degree on the size of the antenna, but larger antennas make installation correspondingly more difficult. [citation needed] During the era between World War I and World War II, a number of systems were introduced that placed the rotating antenna on the ground. As the antenna rotated through a fixed position, typically due north, it was keyed with the Morse code signal of the station's identification letters so that the receiver could ensure they were listening to the right station. Then they waited for the signal to either peak or disappear as the antenna briefly pointed in their direction. By timing the delay between the Morse signal and the peak or null, then dividing by the known rotational rate of the station, the bearing of the station could be calculated. [citation needed] The first such system was the German Telefunken Kompass Sender, which began operations in 1907 and was used operationally by the Zeppelin fleet until 1918. [6] An improved version was introduced by the UK as the Orfordness Beacon in 1929 and used until the mid-1930s. A number of improved versions followed, replacing the mechanical motion of the antennas with phasing techniques that produced the same output pattern with no moving parts.
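The triangulation step referenced above can be written out as a small computation. The following Python sketch intersects the two lines of position obtained from a pair of RDF bearings; the station coordinates and bearings are hypothetical, and a flat chart is assumed (no Earth curvature or magnetic-variation corrections).

```python
import numpy as np

def direction(bearing_deg):
    """Unit vector for a compass bearing (0 deg = north, 90 deg = east)."""
    b = np.radians(bearing_deg)
    return np.array([np.sin(b), np.cos(b)])  # (east, north)

s1, s2 = np.array([0.0, 0.0]), np.array([40.0, 0.0])  # station positions, km
b1, b2 = 315.0, 45.0   # bearings measured FROM the navigator TO each station

# The navigator lies on a line through each station along the reciprocal
# bearing. Solve s1 + t1*d1 = s2 + t2*d2 for the intersection point.
d1, d2 = direction(b1 + 180.0), direction(b2 + 180.0)
A = np.column_stack([d1, -d2])
t = np.linalg.solve(A, s2 - s1)
fix = s1 + t[0] * d1
print(fix)   # navigator's position, km east/north of station 1 -> [20, -20]
```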
Among these rotating-beacon systems, one of the longest-lasting examples was Sonne, which went into operation just before World War II and was used operationally, under the name Consol, until 1991. The modern VOR system is based on the same principles (see below). [citation needed] A great advance in the RDF technique came in the form of phase comparisons of a signal as measured on two or more small antennas, or a single highly directional solenoid. These receivers were smaller, more accurate, and simpler to operate. Combined with the introduction of the transistor and the integrated circuit, RDF systems were so reduced in size and complexity that they once again became quite common during the 1960s, and were known by the new name automatic direction finder, or ADF. [citation needed] This also led to a revival in the operation of simple radio beacons for use with these RDF systems, now referred to as non-directional beacons (NDB). As the LF/MF signals used by NDBs can follow the curvature of the Earth, NDB has a much greater range than VOR, which travels only in line of sight. NDBs can be categorized as long range or short range depending on their power. The frequency band allotted to non-directional beacons is 190–1750 kHz, but the same system can be used with any common AM-band commercial station. [citation needed] VHF omnidirectional range, or VOR, is an implementation of the reverse-RDF system, but one that is more accurate and able to be completely automated. [citation needed] The VOR station transmits two audio signals on a VHF carrier: one is Morse code at 1020 Hz to identify the station, the other is a continuous 9960 Hz tone frequency modulated at 30 Hz, with the 0-degree reference aligned to magnetic north. A further signal is rotated, mechanically or electrically, at 30 Hz, appearing as a 30 Hz AM signal added to the previous two; its phase depends on the position of the aircraft relative to the VOR station. [citation needed] The VOR signal is thus a single RF carrier that is demodulated into a composite audio signal composed of a 9960 Hz reference signal frequency modulated at 30 Hz, a 30 Hz AM variable signal, and a 1020 Hz "marker" signal for station identification. Conversion from this audio signal into a usable navigation aid is done by a navigation converter, which compares the phase of the variable signal with that of the reference signal. The phase difference in degrees is provided to navigational displays. Station identification is by listening to the audio directly, as the 9960 Hz and 30 Hz signals are filtered out of the aircraft's internal communication system, leaving only the 1020 Hz Morse-code station identification. [citation needed] The system may be used with a compatible glideslope and marker beacon receiver, making the aircraft ILS-capable (Instrument Landing System). Once the aircraft's approach is accurate (the aircraft is in the "right place"), the VOR receiver is used on a different frequency to determine whether the aircraft is pointed in the "right direction". Some aircraft employ two VOR receiver systems, one in VOR-only mode to determine the "right place" and another in ILS mode, in conjunction with a glideslope receiver, to determine the "right direction". The combination of both allows for a precision approach in foul weather. [7]
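The VOR phase comparison lends itself to a compact numerical demonstration. The sketch below generates an idealized 30 Hz reference and a 30 Hz variable signal lagging by the aircraft's radial, then recovers that radial by correlation; demodulation of the 9960 Hz FM subcarrier and the AM component is deliberately skipped, and all values are illustrative.

```python
import numpy as np

fs, f = 10_000, 30.0                     # sample rate (Hz), tone (Hz)
t = np.arange(0, 0.5, 1 / fs)            # 0.5 s = 15 full 30 Hz cycles
radial_deg = 237.0                       # assumed true radial from the station

ref = np.cos(2 * np.pi * f * t)                            # reference, 0 deg
var = np.cos(2 * np.pi * f * t - np.radians(radial_deg))   # variable signal

# Estimate the relative phase from in-phase/quadrature correlations,
# as a navigation converter does with analog circuitry.
i = np.mean(var * np.cos(2 * np.pi * f * t))
q = np.mean(var * np.sin(2 * np.pi * f * t))
print(np.degrees(np.arctan2(q, i)) % 360)   # -> ~237.0 degrees
```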
Beam systems broadcast narrow signals in the sky, and navigation is accomplished by keeping the aircraft centred in the beam. A number of stations are used to create an airway, with the navigator tuning in different stations along the direction of travel. These systems were common in the era when electronics were large and expensive, as they placed minimal requirements on the receivers, which were simply voice radio sets tuned to the selected frequencies. However, they did not provide navigation outside of the beams, and were thus less flexible in use. The rapid miniaturization of electronics during and after World War II made systems like VOR practical, and most beam systems rapidly disappeared. [citation needed] In the post-World War I era, the Lorenz company of Germany developed a means of projecting two narrow radio signals with a slight overlap in the centre. By broadcasting different audio signals in the two beams, the pilot could position the aircraft very accurately along the centreline by listening to the signals in the headphones. The system was accurate to less than a degree in some forms. [citation needed] Originally known as "Ultrakurzwellen-Landefunkfeuer" (LFF), or simply "Leitstrahl" (guiding beam), little money was available to develop a network of stations. The first widespread radio navigation network, using low and medium frequencies, was instead led by the US (see LFR, below). Development was restarted in Germany in the 1930s as a short-range system deployed at airports as a blind landing aid. Although there was some interest in deploying a medium-range system like the US LFR, deployment had not yet started when the beam system was combined with the Orfordness timing concepts to produce the highly accurate Sonne system. In all of these roles, the system was generically known simply as a "Lorenz beam". Lorenz was an early predecessor of the modern Instrument Landing System. [citation needed] In the immediate pre-World War II era, the same concept was also developed as a blind-bombing system. This used very large antennas to provide the required accuracy at long distances (over England), and very powerful transmitters. Two such beams were used, crossing over the target to triangulate it. Bombers would enter one of the beams and use it for guidance until they heard the second one in a second radio receiver, using that signal to time the dropping of their bombs. The system was highly accurate, and the 'Battle of the Beams' broke out when United Kingdom intelligence services attempted, and then succeeded, in rendering the system useless through electronic warfare. [citation needed] The low-frequency radio range (LFR, also "Four Course Radio Range" among other names) was the main navigation system used by aircraft for instrument flying in the 1930s and 1940s in the U.S. and other countries, until the advent of the VOR in the late 1940s. It was used both for en route navigation and for instrument approaches. [citation needed] The ground stations consisted of a set of four antennas that projected two overlapping directional figure-eight signal patterns at a 90-degree angle to each other. One of these patterns was "keyed" with the Morse code signal "A", dit-dah, and the second pattern with "N", dah-dit. This created two opposed "A" quadrants and two opposed "N" quadrants around the station. The borders between these quadrants created four course legs or "beams"; if the pilot flew along these lines, the "A" and "N" signals merged into a steady "on course" tone and the pilot was "on the beam". If the pilot deviated to either side, the "A" or "N" tone became louder, and the pilot knew to make a correction.
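The following sketch shows why the two quadrant signals merge into a steady tone. It assumes idealized, perfectly interlocked keying in which the "A" (dit-dah) pattern occupies exactly the time slots left silent by the "N" (dah-dit) pattern; the unit timing is simplified for illustration.

```python
# 1 = tone on, 0 = tone off, per time unit (simplified timing).
N = [1, 1, 1, 0, 1, 0, 0, 0]          # dah, gap, dit, letter gap
A = [1 - x for x in N]                # the complementary slots: dit, gap, dah

def heard(a_level, n_level):
    """Audio envelope heard when the two lobes arrive at given strengths."""
    return [a_level * a + n_level * n for a, n in zip(A, N)]

print(heard(1.0, 1.0))  # on the beam: [1, 1, 1, 1, 1, 1, 1, 1] - steady tone
print(heard(0.2, 1.0))  # off to the "N" side: the dah-dit pattern dominates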
The beams were typically aligned with other stations to produce a set of airways, allowing an aircraft to travel from airport to airport by following a selected set of stations. Effective course accuracy was about three degrees, which near the station provided sufficient safety margins for instrument approaches down to low minimums. At its peak deployment, there were over 400 LFR stations in the US. [8] The remaining widely used beam systems are the glide path and the localizer of the instrument landing system (ILS). ILS uses a localizer to provide horizontal position and a glide path to provide vertical positioning, and can provide enough accuracy and redundancy to allow automated landings. Positions can be determined with any two measures of angle or distance. The introduction of radar in the 1930s provided a way to directly determine the distance to an object even at long distances. Navigation systems based on these concepts soon appeared, and remained in widespread use until recently. Today they are used primarily for aviation, although GPS has largely supplanted this role. [citation needed] Early radar systems, like the UK's Chain Home, consisted of large transmitters and separate receivers. The transmitter periodically sends out a short pulse of a powerful radio signal, which is sent into space through broadcast antennas. When the signal reflects off a target, some of it is reflected back in the direction of the station, where it is received. The received signal is a tiny fraction of the broadcast power and must be strongly amplified in order to be used. [citation needed] The same signals are also sent over local electrical wiring to the operator's station, which is equipped with an oscilloscope. Electronics attached to the oscilloscope provide a signal that increases in voltage over a short period of time, a few microseconds. When sent to the X input of the oscilloscope, this causes a horizontal line to be displayed on the scope. This "sweep" is triggered by a signal tapped off the broadcaster, so the sweep begins when the pulse is sent. Amplified signals from the receiver are then sent to the Y input, where any received reflection causes the beam to move upward on the display. This causes a series of "blips" to appear along the horizontal axis, indicating reflected signals. By measuring the distance from the start of the sweep to the blip, which corresponds to the time between broadcast and reception, the distance to the object can be determined. [citation needed] Soon after the introduction of radar, the radio transponder appeared. Transponders are a combination of receiver and transmitter whose operation is automated: upon reception of a particular signal, normally a pulse on a particular frequency, the transponder sends out a pulse in response, typically delayed by some very short time. Transponders were initially used as the basis for early IFF systems; aircraft with the proper transponder would appear on the display as part of the normal radar operation, but the signal from the transponder would then cause a second blip to appear a short time later. Single blips were enemies, double blips friendly. [citation needed] Transponder-based distance-distance navigation systems have a significant advantage in terms of positional accuracy.
Any radio signal spreads out over distance, forming, for instance, the fan-like beams of the Lorenz signal. As the distance between the broadcaster and the receiver grows, the area covered by the fan increases, decreasing the accuracy of a location within it. In comparison, transponder-based systems measure the timing between two signals, and the accuracy of that measurement is largely a function of the equipment and nothing else. This allows these systems to remain accurate over very long ranges. [citation needed] The latest transponder systems (mode S) can also provide position information, possibly derived from GNSS, allowing for even more precise positioning of targets. [citation needed] The first distance-based navigation system was the German Y-Gerät blind-bombing system. This used a Lorenz beam for horizontal positioning and a transponder for ranging. A ground-based system periodically sent out pulses which the airborne transponder returned. By measuring the total round-trip time on a radar's oscilloscope, the aircraft's range could be accurately determined even at very long distances (a numerical sketch of this calculation follows below). An operator then relayed this information to the bomber crew over voice channels, indicating when to drop the bombs. [citation needed] The British introduced similar systems, notably Oboe. This used two stations in England that operated on different frequencies and allowed the aircraft to be triangulated in space. To ease pilot workload, only one of these was used for navigation: prior to the mission, a circle was drawn over the target from one of the stations, and the aircraft was directed to fly along this circle on instructions from the ground operator. The second station was used, as in Y-Gerät, to time the bomb drop. Unlike Y-Gerät, Oboe was deliberately built to offer very high accuracy, as good as 35 m, much better than even the best optical bombsights. [citation needed] One problem with Oboe was that it allowed only one aircraft to be guided at a time. This was addressed in the later Gee-H system by placing the transponder on the ground and the broadcaster in the aircraft. The signals were then examined on existing Gee display units in the aircraft (see below). Gee-H did not offer the accuracy of Oboe, but could be used by as many as 90 aircraft at once. This basic concept has formed the basis of most distance-measuring navigation systems to this day. [citation needed] The key to the transponder concept is that it can be used with existing radar systems. The ASV radar introduced by RAF Coastal Command was designed to track down submarines and ships by displaying the signal from two antennas side by side and allowing the operator to compare their relative strength. Adding a ground-based transponder immediately turned the same display into a system able to guide the aircraft towards a transponder, or "beacon" in this role, with high accuracy. [citation needed] The British put this concept to use in their Rebecca/Eureka system, where battery-powered "Eureka" transponders were triggered by airborne "Rebecca" radios and then displayed on ASV Mk. II radar sets. Eurekas were provided to French resistance fighters, who used them to call in supply drops with high accuracy. The US quickly adopted the system for paratroop operations, dropping the Eureka with pathfinder forces or partisans and then homing in on those signals to mark the drop zones. [citation needed] The beacon system was widely used in the post-war era for blind bombing systems.
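The round-trip ranging used by Y-Gerät, Gee-H, and later DME reduces to one line of arithmetic once the transponder's fixed reply delay is accounted for. The sketch below is illustrative; the 50 µs turnaround and the measured time are assumed values.

```python
C = 299_792_458.0          # speed of light, m/s
REPLY_DELAY = 50e-6        # assumed fixed transponder turnaround, s

def range_km(round_trip_s):
    """One-way range from total round-trip time, minus the reply delay."""
    return C * (round_trip_s - REPLY_DELAY) / 2.0 / 1000.0

# A pulse answered 1.384 ms after transmission:
print(f"{range_km(1.384e-3):.1f} km")   # -> ~200 km
```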
Of particular note were systems used by the US Marines that allowed the signal to be delayed in such a way as to offset the drop point. These systems allowed troops at the front line to direct aircraft to points in front of them, directing fire onto the enemy. Beacons were widely used for temporary or mobile navigation as well, as the transponder systems were generally small and low-powered, able to be man-portable or mounted on a Jeep. [citation needed] In the post-war era, a general navigation system using transponder-based techniques was deployed as the distance measuring equipment (DME) system. [citation needed] DME was identical to Gee-H in concept, but used new electronics to automatically measure the time delay and display it as a number, rather than having the operator time the signals manually on an oscilloscope. This led to the possibility that DME interrogation pulses from different aircraft might be confused, but this was solved by having each aircraft send out a different series of pulses which the ground-based transponder repeated back. DME is almost always used in conjunction with VOR and is normally co-located at a VOR station. This combination allows a single VOR/DME station to provide both angle and distance, and thereby a single-station fix. DME is also used as the distance-measuring basis for the military TACAN system, and its DME signals can be used by civilian receivers. [citation needed] Hyperbolic navigation systems are a modified form of transponder systems that eliminate the need for an airborne transponder. The name refers to the fact that they do not produce a single distance or angle, but instead indicate a location along any number of hyperbolic lines in space. Two such measurements produce a fix. As these systems are almost always used with a specific navigational chart with the hyperbolic lines plotted on it, they generally reveal the receiver's location directly, eliminating the need for manual triangulation. As these charts were digitized, they became the first true location-indication navigational systems, outputting the location of the receiver as latitude and longitude. Hyperbolic systems were introduced during World War II and remained the main long-range advanced navigation systems until GPS replaced them in the 1990s (a numerical sketch of a hyperbolic fix follows below). [citation needed] The first hyperbolic system to be developed was the British Gee system, developed during World War II. Gee used a series of transmitters sending out precisely timed signals, with the signals leaving the stations at fixed delays. Aboard an aircraft using Gee (RAF Bomber Command's heavy bombers), the navigator examined the times of arrival of the signals on an oscilloscope at the navigator's station. If the signals from two stations arrived at the same time, the aircraft must be an equal distance from both transmitters, allowing the navigator to draw a line of position on his chart of all the positions at that distance from both stations. More typically, the signal from one station would be received earlier than the other; the difference in timing between the two signals would place the aircraft along a curve of possible locations. By making similar measurements with other stations, additional lines of position can be produced, leading to a fix. Gee was accurate to about 165 yards (150 m) at short ranges, and up to a mile (1.6 km) at longer ranges over Germany. Gee remained in use long after World War II, and equipped RAF aircraft as late as the 1960s (the approximate frequency was by then 68 MHz). [citation needed]
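A Gee-style hyperbolic fix can be reproduced numerically. In the sketch below, each measured time difference constrains the receiver to one hyperbola, and the position is found where the lines of position intersect; a brute-force search over a chart grid stands in for the navigator's paper chart. The station layout and the "observed" timings are invented.

```python
import numpy as np

C = 299_792_458.0
stations = np.array([[0.0, 0.0], [100e3, 0.0], [0.0, 120e3]])  # master + 2

def tdoas(p):
    """Time differences of arrival relative to the master station."""
    d = np.linalg.norm(stations - np.asarray(p, float), axis=1)
    return (d[1:] - d[0]) / C

measured = tdoas([250e3, 180e3])       # "observed" time differences

# Locate the receiver by searching a chart grid for the point where
# both hyperbolic lines of position agree with the measurements.
xs = np.linspace(0, 400e3, 401)
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
d = np.linalg.norm(grid[:, None, :] - stations[None, :, :], axis=2)
err = ((d[:, 1:] - d[:, :1]) / C - measured) ** 2
best = grid[err.sum(axis=1).argmin()]
print(best)   # -> approximately [250000, 180000]
```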
With Gee entering operation in 1942, similar US efforts were seen to be superfluous, and US development turned towards a much longer-ranged system based on the same principles, using much lower frequencies that allowed coverage across the Atlantic Ocean. The result was LORAN, for "LOng-range Aid to Navigation". The downside of the long-wavelength approach was that accuracy was greatly reduced compared with the high-frequency Gee. LORAN was widely used during convoy operations in the late-war period. [9] Another British system from the same era was the Decca Navigator. This differed from Gee primarily in that the signals were not pulses delayed in time, but continuous signals delayed in phase. By comparing the phase of the two signals, the same time-difference information as in Gee was obtained. However, this was far easier to display: the system could output the phase angle to a pointer on a dial, removing any need for visual interpretation. As the circuitry for driving this display was quite small, Decca systems normally used three such displays, allowing quick and accurate reading of multiple fixes. Decca found its greatest use post-war on ships, and remained in use into the 1990s. [citation needed] Almost immediately after the introduction of LORAN, in 1952, work started on a greatly improved version. LORAN-C (the original retroactively became LORAN-A) combined the pulse-timing techniques of Gee with the phase comparison of Decca. [citation needed] The resulting system, operating in the low frequency (LF) radio spectrum from 90 to 110 kHz, was both long-ranged (for 60 kW stations, up to 3,400 miles) and accurate. To do this, LORAN-C sent a pulsed signal, but modulated the pulses with an AM signal within it. Gross positioning was determined using the same methods as Gee, locating the receiver within a wide area. Finer accuracy was then provided by measuring the phase difference of the signals, overlaying that second measure on the first. By 1962, high-power LORAN-C was in place in at least 15 countries. [10] LORAN-C was fairly complex to use, requiring a room of equipment to pull out the different signals. However, with the introduction of integrated circuits, the equipment shrank further and further. By the late 1970s, LORAN-C units were the size of a stereo amplifier and were commonly found on almost all commercial ships as well as some larger aircraft. By the 1980s, units had been reduced to the size of a conventional radio, and LORAN-C became common even on pleasure boats and personal aircraft. It was the most popular navigation system in use through the 1980s and 1990s, and its popularity led to many older systems, like Gee and Decca, being shut down. However, like the beam systems before it, LORAN-C's civilian use was short-lived once GPS technology drove it from the market. [citation needed] Similar hyperbolic systems included the US global-wide VLF/Omega Navigation System and the similar Alpha deployed by the USSR. These systems determined pulse timing not by comparing two signals, but by comparing a single signal with a local atomic clock. The expensive-to-maintain Omega system was shut down in 1997 as the US military migrated to GPS. Alpha is still in use. [citation needed] Since the 1960s, navigation has increasingly moved to satellite navigation systems. These are essentially hyperbolic [11] [12] systems whose transmitters are in orbit.
Because the satellites move with respect to the receiver, the positions of the satellites must be continually calculated, which can only be handled effectively with a computer. [citation needed] Satellite navigation systems send several signals that are used to decode the satellite's position, the distance between the satellite and the user, and the user's precise time. One signal encodes the satellite's ephemeris data, which is used to accurately calculate the satellite's location at any time. Space weather and other effects cause the orbit to change over time, so the ephemeris has to be updated periodically. Other signals send out the time as measured by the satellite's onboard atomic clock. By measuring signal times of arrival (TOAs) from at least four satellites, the user's receiver can rebuild an accurate clock signal of its own, allowing hyperbolic navigation to be carried out (a numerical sketch of this principle appears at the end of this article). [citation needed] Satellite navigation systems offer better accuracy than any land-based system, are available at almost all locations on the Earth, can be implemented (receiver-side) at modest cost and complexity with modern electronics, and require only a few dozen satellites to provide worldwide coverage [citation needed]. As a result of these advantages, satellite navigation has led to almost all previous systems falling from use [citation needed]. LORAN, Omega, Decca, Consol and many other systems disappeared during the 1990s and 2000s [citation needed]. The only other systems still in use are aviation aids, which are also being turned off [citation needed] for long-range navigation, while new differential GPS systems are being deployed to provide the local accuracy needed for blind landings. [citation needed] Radionavigation service (short: RNS) is, according to Article 1.42 of the International Telecommunication Union's (ITU) Radio Regulations (RR), [13] defined as "A radiodetermination service for the purpose of radionavigation, including obstruction warning." This service is a so-called safety-of-life service, must be protected from interference, and is an essential part of navigation. [citation needed] This radiocommunication service is classified in accordance with ITU Radio Regulations (article 1) as a radiodetermination service (article 1.40). Aeronautical radionavigation service (short: ARNS) is, according to Article 1.46 of the ITU Radio Regulations (RR), [14] defined as "A radionavigation service intended for the benefit and for the safe operation of aircraft." This service is likewise a safety-of-life service, must be protected from interference, and is an essential part of navigation. Maritime radionavigation service (short: MRNS) is, according to Article 1.44 of the ITU Radio Regulations (RR), [15] defined as "A radionavigation service intended for the benefit and for the safe operation of ships." This service too is a safety-of-life service, must be protected from interference, and is an essential part of navigation. A radionavigation land station is, according to Article 1.88 of the ITU Radio Regulations (RR), [16] defined as "A radio station in the radionavigation service not intended to be used while in motion." Each radio station shall be classified by the radiocommunication service in which it operates permanently or temporarily.
This station operates in a safety-of-life service and must be protected from interference. [citation needed] In accordance with ITU Radio Regulations (article 1), this type of radio station is classified as a radiodetermination station (article 1.86) of the radiodetermination service (article 1.40). A radionavigation mobile station is, according to Article 1.87 of the ITU Radio Regulations (RR), [17] defined as "A radio station in the radionavigation service intended to be used while in motion or during halts at unspecified points." Each radio station shall be classified by the radiocommunication service in which it operates permanently or temporarily. This station also operates in a safety-of-life service and must be protected from interference. [citation needed] In accordance with ITU Radio Regulations (article 1), this type of radio station is likewise classified as a radiodetermination station (article 1.86) of the radiodetermination service (article 1.40).
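To close with a worked example of the satellite time-of-arrival principle described earlier: four pseudoranges suffice to solve for three position coordinates plus the receiver clock bias. The satellite positions, receiver position, and solver below are illustrative assumptions; real receivers also apply ephemeris, atmospheric, and relativistic corrections.

```python
import numpy as np

C = 299_792_458.0
# Four assumed satellite positions (m), roughly at GPS-like radii.
sats = np.array([[15e6, 10e6, 21e6],
                 [-12e6, 18e6, 16e6],
                 [5e6, -20e6, 17e6],
                 [20e6, -5e6, 15e6]])
truth = np.array([1.2e6, -0.8e6, 6.0e6])       # receiver near Earth's surface
bias = 4.5e-3 * C                               # clock bias expressed in metres

rho = np.linalg.norm(sats - truth, axis=1) + bias   # simulated pseudoranges

# Newton iteration on rho_i = |s_i - p| + b for p = (x, y, z) and b.
x = np.zeros(4)                       # initial guess: Earth centre, zero bias
for _ in range(10):
    d = np.linalg.norm(sats - x[:3], axis=1)
    r = d + x[3] - rho                              # residuals
    J = np.hstack([(x[:3] - sats) / d[:, None],     # d(range)/d(position)
                   np.ones((4, 1))])                # d(range)/d(bias)
    x -= np.linalg.solve(J, r)
print(x[:3], x[3] / C)     # recovered position (m) and clock bias (s)
```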
https://en.wikipedia.org/wiki/Radio_navigation
Radio Objects with Continuous Optical Spectra (abbr. ROCOS, also referred to as ROCOSes) is a group of about 80 astrophysical objects characterized by optical spectra anomalously devoid of emission or absorption features, which makes it impossible to determine their distances and locations in relation to our galaxy. [1] [2] [3] They are considered to be a subclass of blazars, and are similar in their spectral characteristics to DC dwarfs and single stellar-mass black holes. [4] Radio Objects with Continuous Optical Spectra, or ROCOSes, were discovered in the 1970s. [1] Among the discoverers was a group of Soviet astrophysicists, who studied them at the Crimean Astrophysical Observatory and the Special Astrophysical Observatory of the Russian Academy of Sciences, using the former's 2.6-meter optical telescope and the latter's 6-meter optical telescope (BTA-6), along with a 1000-channel photon counter and photometers. [1] [2] [3] The group published their findings in a series of articles in the Russian scientific journals Astronomy Letters and Astronomy Reports. [1] [2] An astronomical radio object is classified as a ROCOS if it possesses (a) an optical image with stellar appearance, which is identified with a radio source, and (b) no emission or absorption features in its optical spectrum, except for those due to the galactic interstellar medium, with a signal-to-noise ratio at the level of those observable for quasar candidates. [1] About 8% of the known astronomical radio objects satisfy these two criteria and are considered ROCOSes. [2] [3] The absence of distinct emission or absorption lines in the ROCOSes' spectra makes them very similar in this regard to highly polarized quasars (HPQ), BL Lac objects, and single stellar-mass black holes. [2] [4] The absence of optical spectral features also makes it impossible to use redshift to determine their distances, or even to ascertain whether they are located within or outside our galaxy. [2]
https://en.wikipedia.org/wiki/Radio_object_with_continuous_optical_spectrum
Radio receiver design includes the electronic design of the different components of a radio receiver, which processes the radio frequency signal from an antenna in order to produce usable information such as audio. The complexity of a modern receiver and the possible range of circuitry and methods employed are more generally covered in electronics and communications engineering. The term radio receiver is understood in this article to mean any device intended to receive a radio signal and generate useful information from it, most notably a recreation of the so-called baseband signal (such as audio) which modulated the radio signal at the time of transmission in a communications or broadcast system. The design of a radio receiver must consider several fundamental criteria to produce a practical result. The main criteria are gain, selectivity, sensitivity, and stability. The receiver must also contain a detector to recover the information that was initially impressed on the radio carrier signal by the process called modulation. [1] Gain is required because the signal intercepted by an antenna has a very low power level, on the order of picowatts or femtowatts. To produce an audible signal in a pair of headphones, this signal must be amplified a trillion-fold or more. The magnitudes of the required gain are so great that the logarithmic unit, the decibel, is preferred: a gain of 1 trillion times the power is 120 decibels, a value achieved by many common receivers (see the arithmetic sketch below). Gain is provided by one or more amplifier stages in a receiver design; some of the gain is applied in the radio-frequency part of the system, and the rest at the frequencies used by the recovered information (audio, video, or data signals). Selectivity is the ability to "tune in" just one station of the many that may be transmitting at any given time. An adjustable bandpass filter is a typical stage of a receiver, and a receiver may include several stages of bandpass filters to provide sufficient selectivity. Additionally, the receiver design must provide immunity from spurious signals that may be generated within the receiver and would interfere with the desired signal. Broadcasting transmitters in any given area are assigned frequencies so that receivers can properly select the desired transmission; this is a key factor limiting the number of transmitting stations that can operate in a given area. Sensitivity is the ability to recover the signal from the background noise. Noise is generated in the path between transmitter and receiver, but is also significantly generated in the receiver's own circuits; inherently, any circuit above absolute zero generates some random noise that adds to the desired signals. In some cases, atmospheric noise is far greater than that produced in the receiver's own circuits, but in some designs measures such as cryogenic cooling are applied to some stages of the receiver to prevent signals from being obscured by thermal noise. A very good receiver design may have a noise figure of only a few times the theoretical minimum for the operating temperature and desired signal bandwidth. The objective is to produce a signal-to-noise ratio of the recovered signal sufficient for the intended purpose. This ratio is also often expressed in decibels.
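The decibel arithmetic used above is easily checked. The sketch below converts power ratios to decibels; the 1 pW antenna signal and 1 mW headphone target are assumed round numbers for illustration.

```python
import math

def db(power_ratio):
    """Convert a power ratio to decibels."""
    return 10 * math.log10(power_ratio)

print(db(1e12))          # a trillion-fold power gain -> 120.0 dB
print(db(1e-3 / 1e-12))  # raising 1 pW to 1 mW -> 90.0 dB of required gain
```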
A signal-to-noise ratio of 10 dB (signal ten times as powerful as the noise) might be usable for voice communications by experienced operators, but a receiver intended for high-fidelity music reproduction might require a signal-to-noise ratio of 50 dB or more. Stability is required in at least two senses. The first is frequency stability: the receiver must stay "tuned" to the incoming radio signal and must not "drift" with time or temperature. Additionally, the great magnitude of gain must be carefully controlled so that spurious oscillations are not produced within the receiver; these would distort the recovered information or, at worst, radiate signals that interfere with other receivers. The detector stage recovers the information from the radio-frequency signal, producing the sound, video, or data that was impressed on the carrier wave initially. Detectors may be as simple as an "envelope" detector for amplitude modulation, or may be more complex circuits for more recently developed techniques such as frequency-hopping spread spectrum. While not fundamental to a receiver, automatic gain control is a great convenience to the user, since it automatically compensates for changing received signal levels or the different levels produced by different transmitters. Many different approaches and fundamental receiver "block diagrams" have been developed to address these several, sometimes contradictory, factors. Once these technical objectives have been achieved, the remaining design process is still complicated by considerations of economics, patent rights, and even fashion. A crystal radio uses no active parts: it is powered only by the radio signal itself, whose detected power must drive the headphones directly in order to be audible at all. To achieve even minimal sensitivity, a crystal radio is limited to low frequencies and a large antenna (usually a long wire). It relies on detection using some sort of semiconductor diode, such as the original cat's-whisker diode discovered long before the development of modern semiconductors. A crystal receiver is very simple and can be easy to make or even improvise, as in the foxhole radio. However, the crystal radio needs a strong RF signal and a long antenna to operate, and it displays poor selectivity since it has only one tuned circuit. The tuned radio frequency receiver (TRF) consists of a radio frequency amplifier with one or more stages, all tuned to the desired reception frequency, followed by a detector, typically an envelope detector using a diode, and then audio amplification. It was developed after the invention of the triode vacuum tube, greatly improving the reception of radio signals through electronic amplification, which had not previously been available. The greatly improved selectivity of the superheterodyne receiver overtook the TRF design in almost all applications, although the TRF design was still used as late as the 1960s among the cheaper "transistor radios" of that era. The reflex receiver was a design from the early 20th century consisting of a single-stage TRF receiver in which the same amplifying tube also amplified the audio signal after it had been detected. This was in an era when each tube was a major cost (and consumer of electrical power), so a substantial increase in the number of passive elements was seen as preferable to an additional tube. The design tends to be rather unstable and is obsolete.
The regenerative receiver also had its heyday at a time when adding an active element (vacuum tube) was considered costly. To increase the gain of the receiver, positive feedback was used in its single RF amplifier stage; this also increased the selectivity of the receiver well beyond what would be expected from a single tuned circuit. The amount of feedback was quite critical in determining the resulting gain and had to be carefully adjusted by the radio operator. Increasing the feedback beyond a certain point caused the stage to oscillate at the frequency to which it was tuned. Self-oscillation reduced the quality of reception of an AM (voice) radio signal but made the set useful as a CW (Morse code) receiver: the beat between the oscillation and the radio signal produced an audio "beeping" sound. The oscillation of a regenerative receiver could also be a source of local interference. An improved design known as the super-regenerative receiver improved performance by allowing an oscillation to build up and then be "quenched", with that cycle repeating at a rapid (ultrasonic) rate. A practical regenerative receiver is remarkably simple compared with a multi-stage TRF receiver, while achieving the same level of amplification through the use of positive feedback. In the direct-conversion receiver, the signals from the antenna are tuned by only a single tuned circuit before entering a mixer, where they are mixed with a signal from a local oscillator tuned to the carrier frequency of the transmitted signal. This is unlike the superheterodyne design, where the local oscillator is at an offset frequency. The output of this mixer is thus at audio frequency, and is passed through a low-pass filter into an audio amplifier which may drive a speaker. For receiving CW (Morse code), the local oscillator is tuned to a frequency slightly different from that of the transmitter in order to turn the received signal into an audible "beep". Practically all modern receivers are of the superheterodyne design. The RF signal from the antenna may have one stage of amplification to improve the receiver's noise figure, although at lower frequencies this is typically omitted. The RF signal enters a mixer, along with the output of the local oscillator, in order to produce a so-called intermediate frequency (IF) signal. An early optimization of the superheterodyne was to combine the local oscillator and mixer into a single stage called a "converter". The local oscillator is tuned to a frequency somewhat higher (or lower) than the intended reception frequency, so that the IF signal appears at one fixed frequency, where it is further amplified in a narrow-band multistage amplifier. Tuning the receiver involves changing the frequency of the local oscillator, with further processing of the signal (especially most of the receiver's amplification and selectivity) conveniently done at the single IF frequency, requiring no further tuning for different stations. In block diagrams of typical superheterodyne receivers for AM and FM broadcast, the main difference lies in the detector: a modern FM design may use a phase-locked-loop detector, unlike the frequency discriminator or ratio detector used in earlier FM receivers. For single-conversion superheterodyne AM receivers designed for medium wave (AM broadcast), the IF is commonly 455 kHz. Most superheterodyne receivers designed for broadcast FM (88–108 MHz) use an IF of 10.7 MHz.
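The frequency planning of a superheterodyne can be illustrated with a few lines of arithmetic. The sketch below assumes high-side injection and the 455 kHz IF quoted above; it also computes the image frequency, the other input frequency that mixes down to the same IF and that the RF stages must reject.

```python
IF = 455.0   # intermediate frequency, kHz (common AM broadcast value)

def plan(station_khz):
    """LO and image frequencies for high-side injection."""
    lo = station_khz + IF          # local oscillator above the station
    image = station_khz + 2 * IF   # also produces |image - lo| = IF
    return lo, image

lo, image = plan(1000.0)           # tuning a station at 1000 kHz
print(f"LO = {lo} kHz, image at {image} kHz")  # LO 1455, image 1910
```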
TV receivers often use intermediate frequencies of about 40 MHz. Some modern multiband receivers actually convert lower-frequency bands first to a much higher frequency (VHF), after which a second mixer with a tunable local oscillator and a second IF stage process the signal as above. Software-defined radio (SDR) is a radio communication system in which components that have traditionally been implemented in hardware (e.g. mixers, filters, amplifiers, modulators/demodulators, detectors, etc.) are instead implemented by means of software on a personal computer or embedded system. [2] While the concept of SDR is not new, the rapidly evolving capabilities of digital electronics render practical many processes that used to be only theoretically possible.
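A minimal taste of the SDR idea: the sketch below demodulates a synthetic AM signal entirely in software using an envelope detector (rectification followed by a moving-average low-pass filter), the digital counterpart of the diode detector discussed earlier. The carrier, tone, and sample rate are invented, and a real SDR would work from digitized RF or IF samples rather than a synthesized waveform.

```python
import numpy as np

fs = 200_000                         # sample rate, Hz
t = np.arange(0, 0.02, 1 / fs)
carrier, tone = 50_000, 1_000        # carrier and audio frequencies, Hz
am = (1 + 0.5 * np.sin(2 * np.pi * tone * t)) * np.cos(2 * np.pi * carrier * t)

# Envelope detection in software: rectify, then low-pass filter with a
# moving average spanning a few carrier cycles.
rectified = np.abs(am)
n = (fs // carrier) * 4              # window of 4 carrier cycles
audio = np.convolve(rectified, np.ones(n) / n, mode="same")
print(audio.max(), audio.min())      # envelope swings with the 1 kHz tone
```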
https://en.wikipedia.org/wiki/Radio_receiver_design
A radio repeater is a combination of a radio receiver and a radio transmitter that receives a signal and retransmits it, so that two-way radio signals can cover longer distances. A repeater sited at a high elevation can allow two mobile stations, otherwise out of line-of-sight propagation range of each other, to communicate. [1] Repeaters are found in professional, commercial, and government mobile radio systems and also in amateur radio. Repeater systems use two different radio frequencies: the mobiles transmit on one frequency, and the repeater station receives those transmissions and retransmits on a second frequency. Since the repeater must transmit at the same time as the signal is being received, and may even use the same antenna for both transmitting and receiving, frequency-selective filters are required to prevent the receiver from being overloaded by the transmitted signal. Some repeaters use two different frequency bands to provide isolation between input and output, or as a convenience. In a communications satellite, a transponder serves a similar function, but the transponder does not necessarily demodulate the relayed signals. A repeater is an automatic radio-relay station, usually located on a mountain top, tall building, or radio tower. It allows communication between two or more base, mobile, or portable stations that are unable to communicate directly with each other because of distance or obstructions between them. The repeater receives on one radio frequency (the "input" frequency), demodulates the signal, and simultaneously retransmits the information on its "output" frequency. All stations using the repeater transmit on the repeater's input frequency and receive on its output frequency. Since the repeater is usually located at an elevation higher than the other radios using it, their range is greatly extended. Because the transmitter and receiver are on at the same time, isolation must exist to keep the repeater's own transmitter from degrading the repeater receiver. If the repeater transmitter and receiver are not well isolated, the repeater's own transmitter desensitizes the repeater receiver. The problem is similar to being at a rock concert and being unable to hear the weak signal of a conversation over the much stronger signal of the band. In general, isolating the receiver from the transmitter is made easier by maximizing, as far as possible, the separation between the input and output frequencies. When operating through a repeater, mobile stations must transmit on a different frequency than the repeater output. Although the repeater site must be capable of simultaneous reception and transmission (on two different frequencies), mobile stations can operate in one mode at a time, alternating between receiving and transmitting, so mobile stations do not need the bulky and costly filters required at a repeater site. Mobile stations may have an option to select a "talk-around" mode to transmit and receive on the same frequency; this is sometimes used for local communication within range of the mobile units. There is no set rule about the spacing of input and output frequencies for all radio repeaters: any spacing where the designer can get sufficient isolation between receiver and transmitter will work. In some countries, under some radio services, there are agreed-on conventions or separations that are required by the system license, as with the standard input-output separations used in the United States.
There are many other separations or spacings between input and output frequencies in operational systems. Same-band repeaters operate with input and output frequencies in the same frequency band. For example, in US two-way radio, 30–50 MHz is one band and 150–174 MHz is another; a repeater with an input of 33.980 MHz and an output of 46.140 MHz is a same-band repeater. In same-band repeaters, a central design problem is keeping the repeater's own transmitter from interfering with the receiver. Reducing the coupling between the transmitter and the input-frequency receiver is called isolation. In same-band repeaters, isolation between transmitter and receiver can be created by using a single antenna and a device called a duplexer, a tuned filter connected to the antenna. Consider, for example, a band-pass duplexer, which passes only a band (a narrow range) of frequencies. There are two legs to the duplexer filter: one is tuned to pass the input frequency, the other to pass the output frequency, and both legs are coupled to the antenna. The repeater receiver is connected to the receive leg, while the transmitter is connected to the transmit leg. The duplexer prevents degradation of receiver sensitivity by the transmitter in two ways. First, the receive leg greatly attenuates the transmitter's carrier at the receiver input (typically by 90–100 dB), preventing the carrier from overloading (blocking) the receiver front end (see the arithmetic sketch below). Second, the transmit leg attenuates the transmitter's broadband noise on the receiver frequency, also typically by 90–100 dB. Because the transmitter and receiver are on different frequencies, they can operate at the same time on a single antenna. There is often not enough tower space to accommodate a separate antenna for each repeater at crowded equipment sites. In same-band repeaters at engineered, shared equipment sites, repeaters can be connected to shared antenna systems. These are common in trunked systems, where up to 29 repeaters for a single trunked system may be located at the same site. (Some architectures, such as iDEN sites, may have more than 29 repeaters.) In a shared system, a receive antenna is usually located at the top of the antenna tower; putting the receive antenna at the top helps it capture weaker signals than if it were mounted lower. By splitting the received signal from the antenna, many receivers can work satisfactorily from a single antenna. Devices called receiver multicouplers split the signal from the antenna into many receiver connections. The multicoupler amplifies the signals reaching the antenna, then feeds them to several receivers, attempting to make up for the losses in the power dividers (or splitters). These operate similarly to a cable-TV splitter but must be built to higher quality standards so that they work in environments where strong interfering signals are present. On the transmitter side, a transmit antenna is installed somewhere below the receive antenna. There is an electrical relationship defined by the distance between the transmit and receive antennas: a desirable null exists if the transmit antenna is located exactly below the receive antenna beyond a minimum distance. Almost the same isolation as a low-grade duplexer (about 60 decibels) can be achieved by installing the transmit antenna below, and along the centerline of, the receive antenna.
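The isolation figures quoted above translate directly into the carrier level reaching the receiver port. The sketch below assumes a 100 W transmitter and 90 dB of duplexer isolation; both numbers are illustrative.

```python
import math

tx_dbm = 10 * math.log10(100.0 / 1e-3)   # 100 W transmitter -> 50 dBm
isolation_db = 90.0                      # assumed duplexer isolation

# Carrier level appearing at the repeater's receiver input:
print(tx_dbm - isolation_db)             # -> -40.0 dBm, vastly weaker than
                                         #    the transmitter's direct output
```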
Several transmitters can be connected to the same antenna using filters called combiners. Transmitters usually have directional devices installed along with the filters to block any reflected power in the event that the antenna malfunctions. The antenna must have a power rating that can handle the summed energy of all connected transmitters at the same time. Transmitter combining systems are lossy: as a rule of thumb, each leg of the combiner has a 50% (3 decibel) power loss. If two transmitters are connected to a single antenna through a combiner, half of their power will reach the combiner output (assuming everything is working properly); if four transmitters are coupled to one antenna, a quarter of each transmitter's power will reach the output of the combining circuit (a short numerical sketch of this rule appears at the end of this article). Part of this loss can be made up with increased antenna gain: fifty watts of transmitter power delivered to the antenna produces a received signal strength at a distant mobile radio that is almost indistinguishable from 100 watts. In trunked systems with many channels, a site design may include several transmit antennas to reduce combining-network losses. For example, a six-channel trunked system may have two transmit antennas with three transmitters connected to each. Because small variations affect every antenna, each antenna will have a slightly different directional pattern and will interact differently with the tower and other nearby antennas. If one were to measure received signal levels, this would cause a variation among channels on a single trunked system; other equipment differences can cause such variations as well. Cross-band repeaters are sometimes part of government trunked radio systems. If one community is on a trunked system and the neighboring community is on a conventional system, a talk group or agency-fleet-subfleet may be designated to communicate with the other community. In an example where the neighboring community is on 153.755 MHz, transmitting on the trunked-system talk group would repeat on 153.755 MHz, and signals received by a base station on 153.755 MHz would go out over the trunked system on the assigned talk group. In conventional government systems, cross-band repeaters are sometimes used to connect two agencies that use radio systems on different bands. For example, when a fire department in Colorado was on a 46 MHz channel while a police department was on a 154 MHz channel, a cross-band repeater was built to allow communication between the two agencies. If one of the systems is simplex, the repeater must have logic preventing transmitter keying in both directions at the same time. Voting comparators with a transmitter keying matrix are sometimes used to connect incompatible base stations. Records of old systems show examples of cross-band commercial systems in every U.S. radio service whose regulations allowed them; in California, specific systems using cross-band repeaters have existed at least since the 1960s. [3] In commercial systems, manufacturers stopped making cross-band mobile radio equipment with acceptable specifications for public-safety systems in the early 1980s, and at the time some systems were dismantled because new radio equipment was not available. Sporadic-E ionospheric ducting can make frequencies of 46 MHz and below unworkable in summer. For decades, cross-band repeaters have been used as fixed links.
The links can be used for remote control of base stations at distant sites or to send audio from a diversity (voting) receiver site back to the diversity-combining system (voting comparator). Some legacy links occur in the US 150–170 MHz band; US Federal Communications Commission rule changes did not allow 150 MHz links after the 1970s. Newer links are more often seen on 72–76 MHz (mid-band), 450–470 MHz interstitial channels, or 900 MHz links. These links, known as fixed stations in US licensing, typically connect an equipment site with a dispatching office. Modern amateur radios sometimes include cross-band repeat capability native to the radio transceiver. In commercial systems, cross-band repeaters are sometimes used as vehicular repeaters. For example, a 150 MHz hand-held may communicate with a vehicle-mounted low-power transceiver, and the low-power radio repeats transmissions from the portable over the vehicle's high-power mobile radio, which has a much longer range. In these systems, the hand-held works as long as it is within range of the low-power mobile repeater. The mobile radio is usually on a different band from the hand-held to reduce the chance of the mobile radio transmitter interfering with the transmission from the hand-held to the vehicle. A difficult engineering problem arises when two such vehicles are at the same location: some protocol must be established so that a single portable transmission does not activate two or more mobile radio transmitters. Motorola's PAC*RT vehicular repeater uses a hierarchy scheme in which each repeater transmits a tone when it is turned on, so the last one on site to turn on is the one that gets used, preventing several from being on at once. Vehicular repeaters are complex but can be less expensive than designing a system that covers a large area and works with the weak signal levels of hand-held radios. Some models of radio signals suggest that the transmitters of hand-held radios produce received signals at the base station one to two orders of magnitude (10 to 20 decibels, or 10 to 100 times) weaker than a mobile radio with a similar transmitter output power. Radio repeaters are typically placed in locations that maximize their effectiveness for their intended purpose, such as mountain tops, tall buildings, and radio towers. Popular mainly in the UK, community-based radio systems usually consist of a community radio repeater (similar to a ham repeater) for use by the community and local businesses, often for civic events, Shopwatch, PubWatch, Neighbourhood Watch, and community engagement. In larger towns, separate systems typically keep commercial and community use apart, whereas in smaller towns a single system is typically used by the whole community.
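Returning to the transmitter-combiner rule of thumb mentioned earlier (roughly 3 dB lost per combining stage), the following sketch works through the arithmetic for two and four transmitters; the power levels are illustrative.

```python
import math

def power_at_antenna(tx_watts, n_transmitters):
    """Rule-of-thumb combiner output: 3 dB (half power) per stage."""
    stages = math.ceil(math.log2(n_transmitters))   # 2 -> 1 stage, 4 -> 2
    return tx_watts * 0.5 ** stages

print(power_at_antenna(100, 2))   # 50.0 W  (one 3 dB stage)
print(power_at_antenna(100, 4))   # 25.0 W  (a quarter, as noted above)
```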
https://en.wikipedia.org/wiki/Radio_repeater
Radio spectrum pollution is the straying of waves in the radio and electromagnetic spectra outside their allocations, which causes problems for some activities. [ 1 ] It is of particular concern to radio astronomers. Radio spectrum pollution is mitigated by effective spectrum management. Within the United States, the Communications Act of 1934 grants authority for spectrum management to the President for all federal use (47 U.S.C. 305). The National Telecommunications and Information Administration (NTIA) manages the spectrum for the Federal Government. Its rules are found in the "NTIA Manual of Regulations and Procedures for Federal Radio Frequency Management". The Federal Communications Commission (FCC) manages and regulates all domestic non-federal spectrum use (47 U.S.C. 301). [ 2 ] Each country typically has its own spectrum regulatory organization. Internationally, the International Telecommunication Union (ITU) coordinates spectrum policy. [ 1 ]
https://en.wikipedia.org/wiki/Radio_spectrum_pollution
The radio spectrum scope (also radio panoramic receiver, panoramic adapter, pan receiver, pan adapter, panadapter, panoramic radio spectroscope, panoramoscope, panalyzor and band scope), invented by Marcel Wallace, measures and displays the magnitude of an input signal versus frequency within one or more radio bands, e.g. the shortwave bands. [ 1 ] [ 2 ] A spectrum scope is normally much cheaper than a spectrum analyzer, because the aim is neither high-quality frequency resolution nor high-quality signal strength measurement. Modern spectrum scopes, like the Elecraft P3, also plot signal frequencies and amplitudes over time in a rolling format called a waterfall plot.
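In essence, a band scope repeatedly estimates a magnitude spectrum and, for a waterfall display, stacks those estimates over time. The sketch below shows the idea under assumed conditions: complex baseband samples from a receiver, with block sizes and names chosen for illustration rather than taken from any particular product.

```python
# A minimal sketch of what a band scope computes, assuming complex
# baseband samples `x` from a receiver; parameters are illustrative.
import numpy as np

def spectrum_frame(x, n_fft=1024):
    """Magnitude spectrum (dB) of one block of samples."""
    windowed = x[:n_fft] * np.hanning(n_fft)          # reduce spectral leakage
    spectrum = np.fft.fftshift(np.fft.fft(windowed))  # center 0 Hz
    return 20 * np.log10(np.abs(spectrum) + 1e-12)    # magnitude in dB

def waterfall(x, n_fft=1024, hop=1024):
    """Stack successive frames over time: rows = time, columns = frequency."""
    frames = [spectrum_frame(x[i:i + n_fft], n_fft)
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.array(frames)
```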
https://en.wikipedia.org/wiki/Radio_spectrum_scope
A radio telescope is a specialized antenna and radio receiver used to detect radio waves from astronomical radio sources in the sky. [ 1 ] [ 2 ] [ 3 ] Radio telescopes are the main observing instrument used in radio astronomy , which studies the radio frequency portion of the electromagnetic spectrum , just as optical telescopes are used to make observations in the visible portion of the spectrum in traditional optical astronomy . Unlike optical telescopes, radio telescopes can be used in the daytime as well as at night. Since astronomical radio sources such as planets , stars , nebulas and galaxies are very far away, the radio waves coming from them are extremely weak, so radio telescopes require very large antennas to collect enough radio energy to study them, and extremely sensitive receiving equipment. Radio telescopes are typically large parabolic ("dish") antennas similar to those employed in tracking and communicating with satellites and space probes. They may be used individually or linked together electronically in an array. Radio observatories are preferentially located far from major centers of population to avoid electromagnetic interference (EMI) from radio, television , radar , motor vehicles, and other man-made electronic devices. Radio waves from space were first detected by engineer Karl Guthe Jansky in 1932 at Bell Telephone Laboratories in Holmdel, New Jersey using an antenna built to study radio receiver noise. The first purpose-built radio telescope was a 9-meter parabolic dish constructed by radio amateur Grote Reber in his back yard in Wheaton, Illinois in 1937. The sky survey he performed is often considered the beginning of the field of radio astronomy. The first radio antenna used to identify an astronomical radio source was built by Karl Guthe Jansky , an engineer with Bell Telephone Laboratories , in 1932. Jansky was assigned the task of identifying sources of static that might interfere with radiotelephone service. Jansky's antenna was an array of dipoles and reflectors designed to receive short wave radio signals at a frequency of 20.5 MHz (wavelength about 14.6 meters). It was mounted on a turntable that allowed it to rotate in any direction, earning it the name "Jansky's merry-go-round." It had a diameter of approximately 100 ft (30 m) and stood 20 ft (6 m) tall. By rotating the antenna, the direction of the received interfering radio source (static) could be pinpointed. A small shed to the side of the antenna housed an analog pen-and-paper recording system. After recording signals from all directions for several months, Jansky eventually categorized them into three types of static: nearby thunderstorms, distant thunderstorms, and a faint steady hiss above shot noise , of unknown origin. Jansky finally determined that the "faint hiss" repeated on a cycle of 23 hours and 56 minutes. This period is the length of an astronomical sidereal day , the time it takes any "fixed" object located on the celestial sphere to come back to the same location in the sky. Thus Jansky suspected that the hiss originated outside of the Solar System , and by comparing his observations with optical astronomical maps, Jansky concluded that the radiation was coming from the Milky Way Galaxy and was strongest in the direction of the center of the galaxy, in the constellation of Sagittarius . An amateur radio operator, Grote Reber , was one of the pioneers of what became known as radio astronomy . 
He built the first parabolic "dish" radio telescope, 9 metres (30 ft) in diameter, in his back yard in Wheaton, Illinois in 1937. He repeated Jansky's pioneering work, identifying the Milky Way as the first off-world radio source, and he went on to conduct the first sky survey at very high radio frequencies, discovering other radio sources. The rapid development of radar during World War II created technology which was applied to radio astronomy after the war, and radio astronomy became a branch of astronomy, with universities and research institutes constructing large radio telescopes. [ 4 ] The range of frequencies in the electromagnetic spectrum that makes up the radio spectrum is very large. As a consequence, the types of antennas that are used as radio telescopes vary widely in design, size, and configuration. At wavelengths of 30 meters to 3 meters (10–100 MHz), they are generally either directional antenna arrays similar to "TV antennas" or large stationary reflectors with movable focal points. Since the wavelengths being observed with these types of antennas are so long, the "reflector" surfaces can be constructed from coarse wire mesh such as chicken wire. [ 5 ] [ 6 ] At shorter wavelengths parabolic "dish" antennas predominate. The angular resolution of a dish antenna is determined by the ratio of the diameter of the dish to the wavelength of the radio waves being observed. This dictates the dish size a radio telescope needs for a useful resolution (see the sketch following this passage). Radio telescopes that operate at wavelengths of 3 meters to 30 cm (100 MHz to 1 GHz) are usually well over 100 meters in diameter. Telescopes working at wavelengths shorter than 30 cm (above 1 GHz) range in size from 3 to 90 meters in diameter. [ citation needed ] The increasing use of radio frequencies for communication makes astronomical observations more and more difficult (see Open spectrum). Negotiations to defend the frequency allocation for parts of the spectrum most useful for observing the universe are coordinated in the Scientific Committee on Frequency Allocations for Radio Astronomy and Space Science. The world's largest filled-aperture (i.e. full dish) radio telescope is the Five-hundred-meter Aperture Spherical Telescope (FAST), completed in 2016 by China. [ 8 ] The 500-meter-diameter (1,600 ft) dish, with an area as large as 30 football fields, is built into a natural karst depression in the landscape in Guizhou province and cannot move; the feed antenna is in a cabin suspended above the dish on cables. The active dish is composed of 4,450 moveable panels controlled by a computer. By changing the shape of the dish and moving the feed cabin on its cables, the telescope can be steered to point to any region of the sky up to 40° from the zenith. Although the dish is 500 meters in diameter, only a 300-meter circular area on the dish is illuminated by the feed antenna at any given time, so the actual effective aperture is 300 meters. Construction began in 2007 and was completed in July 2016, [ 9 ] and the telescope became operational on September 25, 2016. [ 10 ] The world's second-largest filled-aperture telescope was the Arecibo radio telescope located in Arecibo, Puerto Rico, though it suffered catastrophic collapse on 1 December 2020.
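As a rough illustration of the resolution relation just described, the sketch below evaluates the common diffraction-limit form θ ≈ 1.22 λ/D. The 1.22 factor (the standard Rayleigh criterion) and the 21 cm hydrogen-line example are supplied here for illustration and are not from the text.

```python
# A rough sketch of the dish-resolution relation: the achievable beam
# width scales as wavelength over dish diameter.
import math

def resolution_arcsec(wavelength_m, dish_diameter_m):
    theta_rad = 1.22 * wavelength_m / dish_diameter_m  # Rayleigh criterion
    return math.degrees(theta_rad) * 3600

# A 100 m dish observing the 21 cm hydrogen line:
print(resolution_arcsec(0.21, 100.0))   # ~530 arcseconds, about 9 arcminutes
```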
Arecibo was one of the world's few radio telescopes also capable of active (i.e., transmitting) radar imaging of near-Earth objects (see: radar astronomy); most other telescopes employ passive detection, i.e., receiving only. Arecibo was another stationary dish telescope like FAST. Arecibo's 305 m (1,001 ft) dish was built into a natural depression in the landscape; the antenna was steerable within an angle of about 20° of the zenith by moving the suspended feed antenna, giving use of a 270-meter diameter portion of the dish for any individual observation. The largest individual radio telescope of any kind is the RATAN-600 located near Nizhny Arkhyz, Russia, which consists of a 576-meter circle of rectangular radio reflectors, each of which can be pointed towards a central conical receiver. The above stationary dishes are not fully "steerable"; they can only be aimed at points in an area of the sky near the zenith, and cannot receive from sources near the horizon. The largest fully steerable dish radio telescope is the 100 meter Green Bank Telescope in West Virginia, United States, constructed in 2000. The largest fully steerable radio telescope in Europe is the Effelsberg 100-m Radio Telescope near Bonn, Germany, operated by the Max Planck Institute for Radio Astronomy, which also was the world's largest fully steerable telescope for 30 years until the Green Bank antenna was constructed. [ 11 ] The third-largest fully steerable radio telescope is the 76-meter Lovell Telescope at Jodrell Bank Observatory in Cheshire, England, completed in 1957. The fourth-largest fully steerable radio telescopes are six 70-meter dishes: three Russian RT-70, and three in the NASA Deep Space Network. The planned Qitai Radio Telescope, at a diameter of 110 m (360 ft), is expected to become the world's largest fully steerable single-dish radio telescope when completed in 2028. A more typical radio telescope has a single antenna of about 25 meters diameter. Dozens of radio telescopes of about this size are operated in radio observatories all over the world. Since 1965, humans have launched three space-based radio telescopes. The first, KRT-10, was attached to the Salyut 6 orbital space station in 1979. In 1997, Japan sent the second, HALCA. The third, Spektr-R, was launched by Russia in 2011. One of the most notable developments came in 1946 with the introduction of the technique called astronomical interferometry, which means combining the signals from multiple antennas so that they simulate a larger antenna, in order to achieve greater resolution. Astronomical radio interferometers usually consist either of arrays of parabolic dishes (e.g., the One-Mile Telescope), arrays of one-dimensional antennas (e.g., the Molonglo Observatory Synthesis Telescope) or two-dimensional arrays of omnidirectional dipoles (e.g., Tony Hewish's Pulsar Array). All of the telescopes in the array are widely separated and are usually connected using coaxial cable, waveguide, optical fiber, or other type of transmission line. Recent advances in the stability of electronic oscillators also now permit interferometry to be carried out by independent recording of the signals at the various antennas, and then later correlating the recordings at some central processing facility. This process is known as Very Long Baseline Interferometry (VLBI). Interferometry does increase the total signal collected, but its primary purpose is to vastly increase the resolution through a process called aperture synthesis.
This technique works by superposing (interfering) the signal waves from the different telescopes on the principle that waves that coincide with the same phase will add to each other while two waves that have opposite phases will cancel each other out. This creates a combined telescope that is equivalent in resolution (though not in sensitivity) to a single antenna whose diameter is equal to the spacing of the antennas furthest apart in the array. A high-quality image requires a large number of different separations between telescopes. The projected separation between any two telescopes, as seen from the radio source, is called a baseline. For example, the Very Large Array (VLA) near Socorro, New Mexico has 27 telescopes with 351 independent baselines at once, which achieves a resolution of 0.2 arc seconds at 3 cm wavelengths (see the sketch at the end of this section). [ 12 ] Martin Ryle's group in Cambridge obtained a Nobel Prize for interferometry and aperture synthesis. [ 13 ] The Lloyd's mirror interferometer was also developed independently in 1946 by Joseph Pawsey's group at the University of Sydney. [ 14 ] In the early 1950s, the Cambridge Interferometer mapped the radio sky to produce the famous 2C and 3C surveys of radio sources. An example of a large physically connected radio telescope array is the Giant Metrewave Radio Telescope, located in Pune, India. The largest array, the Low-Frequency Array (LOFAR), finished in 2012, is located in western Europe and consists of about 81,000 small antennas in 48 stations distributed over an area several hundred kilometers in diameter; it operates at wavelengths between 1.25 and 30 m. VLBI systems using post-observation processing have been constructed with antennas thousands of miles apart. Radio interferometers have also been used to obtain detailed images of the anisotropies and the polarization of the Cosmic Microwave Background, like the CBI interferometer in 2004. The world's largest physically connected telescope, the Square Kilometre Array (SKA), is planned to start operations in 2027, [ 15 ] although the first stations had "first fringes" in 2024. [ 16 ] Many astronomical objects are not only observable in visible light but also emit radiation at radio wavelengths. Besides observing energetic objects such as pulsars and quasars, radio telescopes are able to "image" most astronomical objects such as galaxies, nebulae, and even radio emissions from planets. [ 17 ] [ 18 ]
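As a worked example of the baseline arithmetic above: an array of N antennas has N(N−1)/2 unordered pairs, and the synthesized resolution scales as wavelength over the longest baseline. The 36 km maximum baseline for the VLA's largest configuration is supplied here for illustration and is not stated in the text.

```python
# Sketch of the interferometer arithmetic behind the VLA figures above.
import math

def num_baselines(n_antennas):
    # each unordered pair of antennas contributes one baseline
    return n_antennas * (n_antennas - 1) // 2

def synthesized_resolution_arcsec(wavelength_m, max_baseline_m):
    # aperture-synthesis resolution ~ wavelength / longest baseline
    return math.degrees(wavelength_m / max_baseline_m) * 3600

print(num_baselines(27))                            # 351, as quoted for the VLA
print(synthesized_resolution_arcsec(0.03, 36_000))  # ~0.17 arcsec at 3 cm
```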
https://en.wikipedia.org/wiki/Radio_telescope
A radio transmitter or just transmitter is an electronic device which produces radio waves with an antenna . Radio waves are electromagnetic waves with frequencies between about 30 Hz and 300 GHz . The transmitter itself generates a radio frequency alternating current , which is applied to the antenna. When excited by this alternating current, the antenna radiates radio waves. Transmitters are necessary parts of all systems that use radio : radio and television broadcasting , cell phones , wireless networks , radar , two way radios like walkie talkies , radio navigation systems like GPS , remote entry systems , among numerous other uses. A transmitter can be a separate piece of equipment, or an electronic circuit within another device. Most transmitters consist of an electronic oscillator which generates an oscillating carrier wave , a modulator which impresses an information bearing modulation signal on the carrier, and an amplifier which increases the power of the signal. To prevent interference between different users of the radio spectrum , transmitters are strictly regulated by national radio laws, and are restricted to certain frequencies and power levels, depending on use. The design must typically be certificated (formerly type approved ) before sale. An important legal requirement is that the circuit does not radiate significant radio wave power outside its assigned frequency band, called spurious emission . A radio transmitter design has to meet certain requirements. These include the frequency of operation , the type of modulation , the stability and purity of the resulting signal, the efficiency of power use, and the power level required to meet the system design objectives. [ 1 ] High-power transmitters may have additional constraints with respect to radiation safety, generation of X-rays, and protection from high voltages. [ 2 ] Typically a transmitter design includes generation of a carrier signal , which is normally [ 3 ] sinusoidal , optionally one or more frequency multiplication stages, a modulator, a power amplifier, and a filter and matching network to connect to an antenna. A very simple transmitter might contain only a continuously running oscillator coupled to some antenna system. More elaborate transmitters allow better control over the modulation of the emitted signal and improve the stability of the transmitted frequency. For example, the Master Oscillator-Power Amplifier (MOPA) configuration inserts an amplifier stage between the oscillator and the antenna. This prevents changes in the loading presented by the antenna from altering the frequency of the oscillator. [ 4 ] For a fixed frequency transmitter one commonly used method is to use a resonant quartz crystal in a crystal oscillator to fix the frequency. Where the frequency has to be variable, several options can be used. While modern frequency synthesizers can output a clean stable signal up through UHF, for many years, especially at higher frequencies, it was not practical to operate the oscillator at the final output frequency. For better frequency stability, it was common to multiply the frequency of the oscillator up to the final, required frequency. This was accommodated by allocating the short wave amateur and marine bands in harmonically related frequencies such as 3.5, 7, 14 and 28 MHz. Thus one crystal or VFO could cover several bands. In simple equipment this approach is still used occasionally. 
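As a small illustration of the harmonic relationship just described, the sketch below starts from a single 3.5 MHz oscillator and doubles repeatedly; the band edges line up by design.

```python
# One 3.5 MHz crystal followed by frequency doublers covers the
# harmonically related 3.5, 7, 14 and 28 MHz bands mentioned above.
f_osc_mhz = 3.5
for doublings in range(4):
    print(f"{f_osc_mhz * 2 ** doublings:.1f} MHz")  # 3.5, 7.0, 14.0, 28.0
```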
If the output of an amplifier stage is simply tuned to a multiple of the frequency with which the stage is driven, the stage will give a large harmonic output. Many transmitters have used this simple approach successfully, but the more complex circuits described next do a better job. In a push-push stage, the output will contain only even harmonics, because the currents which would generate the fundamental and the odd harmonics in this circuit are canceled by the second device. In a push-pull stage, the output will contain only odd harmonics because of the canceling effect. The task of a transmitter is to convey some form of information using a radio signal (carrier wave) which has been modulated to carry the information. The RF generators in microwave ovens, electrosurgery equipment, and induction heating are similar in design to transmitters, but are usually not considered as such because they do not intentionally produce a signal that will travel to a distant point. Such RF devices are required by law to operate in an ISM band where interference to radio communications will not occur. Where communication is the object, one or more of the following methods of incorporating the desired signal into the radio wave is used. When a radio frequency wave is varied in amplitude in a manner which follows the modulating signal, usually voice, video or data, the result is amplitude modulation (AM). In low-level modulation, a small audio stage is used to modulate a low-power stage, and the output of this stage is then amplified using a linear RF amplifier. The great disadvantage of this system is that the amplifier chain is less efficient, because it has to be linear to preserve the modulation; hence high-efficiency class C amplifiers cannot be employed, unless a Doherty amplifier, EER (Envelope Elimination and Restoration) or other methods of predistortion or negative feedback are used. High-level modulation uses class C amplifiers in a broadcast AM transmitter; only the final stage or final two stages are modulated, and all the earlier stages can be driven at a constant level. When modulation is applied to the plate of the final tube, a large audio amplifier is needed for the modulation stage, equal to 1/2 of the DC input power of the modulated stage (a worked example follows this passage). Traditionally the modulation is applied using a large audio transformer, although many different circuits have been used for high-level AM modulation; see Amplitude modulation. A wide range of different circuits have been used for AM. While it is perfectly possible to create good designs using solid-state electronics, valved (tube) circuits are described here; in general, valves are able to easily yield RF powers far in excess of what can be achieved using solid state. Most high-power broadcast stations below 3 MHz use solid-state circuits, but higher-power stations above 3 MHz still use valves. High-level plate modulation consists of varying the voltage on the plate (anode) of the valve so that it swings from nearly zero to double the resting value. This produces 100% modulation and can be done by inserting a transformer in series with the high-voltage supply to the anode, so that the vector sum of the two sources (DC and audio) is applied. A disadvantage is the size, weight and cost of the transformer, as well as its limited audio frequency response, especially for very powerful transmitters. Alternatively, a series regulator can be inserted between the DC supply and the anode; the DC supply provides twice the average voltage the anode sees.
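A worked example of the modulator sizing rule above, with voltage and current figures that are illustrative rather than from the text:

```python
# 100% sine-wave plate modulation requires audio power equal to half the
# DC input of the modulated RF stage. Figures below are assumed.
plate_voltage = 5000.0            # volts DC on the final stage (assumed)
plate_current = 2.0               # amperes of plate current (assumed)

dc_input = plate_voltage * plate_current   # 10 kW DC input to the final
audio_power = dc_input / 2                 # 5 kW of audio for 100% modulation
print(f"{audio_power:.0f} W of audio needed")
```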
The series regulator can allow none or all of the voltage to pass, or any intermediate value. The audio input operates the regulator in such a way as to produce the instantaneous anode voltage needed to reproduce the modulation envelope. An advantage of the series regulator is that it can set the anode voltage to any desired value, so the power output of the transmitter can be easily adjusted, allowing the use of dynamic carrier control. The use of PDM switching regulators makes this system very efficient, whereas the original analog regulators were very inefficient and also nonlinear. Series PDM modulators are used in solid-state transmitters also, but the circuits are somewhat more complex, using push-pull or bridge circuits for the RF section. (Practical circuits also include details such as filament, screen and grid bias supplies, and the screen and cathode connections to RF ground, which simplified descriptions omit.) In screen modulation, under carrier conditions (no audio) the stage is a simple RF amplifier in which the screen voltage is set lower than normal to limit the RF output to about 25% of full power. When the stage is modulated, the screen potential changes and so alters the gain of the stage. It takes much less audio power to modulate the screen, but final stage efficiency is only about 40%, compared to 80% with plate modulation. For this reason screen modulation was used only in low-power transmitters and is now effectively obsolete. Several derivatives of AM are in common use. Single-sideband full-carrier modulation (SSB, or SSB-AM) is very similar to single-sideband suppressed-carrier modulation (SSB-SC). It is used where it is necessary to receive the audio on an AM receiver while using less bandwidth than double-sideband AM; due to high distortion, it is seldom used. Either SSB-AM or SSB-SC can be produced by the following methods. In the filter method, a balanced mixer generates a double-sideband signal, which is then passed through a very narrow bandpass filter to leave only one sideband. [ 5 ] By convention it is normal to use the upper sideband (USB) in communication systems, except for amateur radio when the carrier frequency is below 10 MHz, where the lower sideband (LSB) is normally used. The phasing method for the generation of single-sideband signals uses a network which imposes a constant 90° phase shift on audio signals over the audio range of interest; this was difficult with analog methods but is very simple with DSP. The two audio outputs, the original and the 90°-shifted version, are each mixed in a linear balanced mixer with a carrier, with the carrier drive for one of the mixers also shifted by 90°. The outputs of these mixers are added in a linear circuit to give the SSB signal by phase cancellation of one of the sidebands. Connecting the 90°-delayed signal from either the audio or the carrier (but not both) to the other mixer reverses the sideband, so either USB or LSB is available with a simple DPDT switch (see the sketch following this passage). Vestigial-sideband modulation (VSB, or VSB-AM) is a type of modulation system commonly used in analogue TV systems. It is normal AM which has been passed through a filter which reduces one of the sidebands; typically, components of the lower sideband more than 0.75 MHz or 1.25 MHz below the carrier are heavily attenuated. Morse code is usually sent using on-off keying of an unmodulated carrier (continuous wave); no special modulator is required. This interrupted carrier may be analyzed as an AM-modulated carrier. On-off keying produces sidebands, as expected, but they are referred to as "key-clicks".
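A minimal sketch of the phasing method described above, using a Hilbert transform as the 90° audio network; the sample rate, carrier frequency and test tone are assumed for illustration only.

```python
# Phasing-method SSB: mix the audio and its 90-degree-shifted copy with
# quadrature carriers; one sideband cancels in the sum.
import numpy as np
from scipy.signal import hilbert

fs = 48_000                       # sample rate, Hz (assumed)
fc = 10_000                       # "carrier" frequency, Hz (must be < fs/2)
t = np.arange(0, 0.1, 1 / fs)
audio = np.cos(2 * np.pi * 1_000 * t)     # 1 kHz test tone as the audio

audio_90 = np.imag(hilbert(audio))        # audio shifted 90 degrees
carrier_i = np.cos(2 * np.pi * fc * t)
carrier_q = np.sin(2 * np.pi * fc * t)    # carrier shifted 90 degrees

usb = audio * carrier_i - audio_90 * carrier_q  # one sideband cancels
lsb = audio * carrier_i + audio_90 * carrier_q  # sign flip selects the other
```

The sign flip in the last line is the software equivalent of the DPDT sideband-select switch mentioned above.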
To limit the bandwidth of these key-click sidebands and reduce interference to adjacent channels, shaping circuits are used to turn the transmitter on and off smoothly instead of instantly. Angle modulation is the proper term for modulation by changing the instantaneous frequency or phase of the carrier signal. True FM and phase modulation are the most commonly employed forms of analogue angle modulation. Direct FM (true frequency modulation) is where the frequency of an oscillator is altered to impose the modulation upon the carrier wave. This can be done by using a voltage-controlled capacitor (varicap diode) in a crystal-controlled oscillator or frequency synthesiser. The frequency of the oscillator is then multiplied up using a frequency multiplier stage, or is translated upwards using a mixing stage, to the output frequency of the transmitter. The amount of modulation is referred to as the deviation, being the amount that the frequency of the carrier instantaneously deviates from the centre carrier frequency. Indirect FM employs a varicap diode to impose a phase shift (which is voltage-controlled) in a tuned circuit that is fed with a plain carrier; this is termed phase modulation. In some indirect FM solid-state circuits, an RF drive is applied to the base of a transistor, and the tank circuit (LC), connected to the collector via a capacitor, contains a pair of varicap diodes. As the voltage applied to the varicaps is changed, the phase shift of the output changes. Phase modulation is mathematically equivalent to direct frequency modulation with a 6 dB/octave high-pass filter applied to the modulating signal. This high-pass effect can be exploited or compensated for using suitable frequency-shaping circuitry in the audio stages ahead of the modulator; for example, many FM systems employ pre-emphasis and de-emphasis for noise reduction, in which case the high-pass equivalency of phase modulation automatically provides the pre-emphasis. Phase modulators are typically only capable of relatively small amounts of deviation while remaining linear, but any frequency multiplier stages also multiply the deviation in proportion (see the sketch following this passage). Transmission of digital data is becoming more and more important. Digital information can be transmitted by AM and FM modulation, but often digital modulation consists of complex forms of modulation using aspects of both AM and FM. COFDM, used for DRM broadcasts, transmits a signal consisting of multiple carriers, each modulated in both amplitude and phase; this allows very high bit rates and makes very efficient use of bandwidth. Digital or pulse methods are also used to transmit voice, as in cell phones, or video, as in terrestrial TV broadcasting. Early text messaging such as RTTY allowed the use of class C amplifiers, but modern digital modes require linear amplification. See also Sigma-delta modulation (ΣΔ). For high-power, high-frequency systems it is normal to use valves; see Valve RF amplifier for details of how valved RF power stages work. Valves are electrically very robust; they can tolerate overloads which would destroy bipolar transistor systems in milliseconds. As a result, valved amplifiers may resist mistuning, lightning and power surges better. However, they require a heated cathode, which consumes power, and will fail in time due to loss of emission or heater burnout. The high voltages associated with valve circuits are dangerous to persons.
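A quick worked example of deviation multiplication as described above; the deviation target and multiplier chain are assumed for illustration, not taken from the text.

```python
# Frequency multiplier stages multiply a phase modulator's small
# deviation up to the deviation needed at the output frequency.
target_deviation_hz = 5000.0      # deviation wanted at the output (assumed)
multiplication = 12               # e.g. a x3 stage followed by two x2 stages

required_at_modulator = target_deviation_hz / multiplication
print(f"{required_at_modulator:.0f} Hz")   # ~417 Hz at the phase modulator
```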
For economic reasons, valves continue to be used for the final power amplifier for transmitters operating above 1.8 MHz and with powers above about 500 watts for amateur use and above about 10 kW for broadcast use. Solid state devices, either discrete transistors or integrated circuits, are universally used for new transmitter designs up to a few hundred watts. The lower level stages of more powerful transmitters are also all solid state. Transistors can be used at all frequencies and power levels, but since the output of individual devices is limited, higher power transmitters must use many transistors in parallel, and the cost of the devices and the necessary combining networks can be excessive. As new transistor types become available and the price drops, solid state may eventually replace all valve amplifiers. The majority of modern transmitting equipment is designed to operate with a resistive load fed via coaxial cable of a particular characteristic impedance , often 50 ohms . To connect the power stage of the transmitter to this coaxial cable transmission line a matching network is required. For solid state transmitters this is typically a broadband transformer which steps up the low impedance of the output devices to 50 ohms. A tube transmitter will contain a tuned output network, most commonly a PI network, that steps the load impedance which the tube requires down to 50 ohms. In each case the power producing devices will not transfer power efficiently if the network is detuned or badly designed or if the antenna presents other than 50 ohms at the transmitter output. Commonly an SWR meter and/or directional wattmeter are used to check the extent of the match between the aerial system and the transmitter via the transmission line (feeder). A directional wattmeter indicates forward power, reflected power, and often SWR as well. Each transmitter will specify a maximum allowable mismatch based on efficiency, distortion, and possible damage to the transmitter. Many transmitters have automatic circuits to reduce power or shut down if this value is exceeded. Transmitters feeding a balanced transmission line will need a balun . This transforms the single ended output of the transmitter to a higher impedance balanced output. High power short wave transmission systems typically use 300 ohm balanced lines between the transmitter and antenna. Amateurs often use 300–450 ohm balanced antenna feeders. See Antenna tuner and balun for details of matching networks and baluns respectively. Many devices depend on the transmission and reception of radio waves for their operation. The possibility for mutual interference is great. Many devices not intended to transmit signals may do so. For instance a dielectric heater might contain a 2000 watt 27 MHz source within it. If the machine operates as intended then none of this RF power will leak out. However, if due to poor design or maintenance it allows RF to leak out, it will become a transmitter or unintentional radiator. All equipment using RF electronics should be inside a screened conductive box and all connections in or out of the box should be filtered to avoid the passage of radio signals. A common and effective method of doing so for wires carrying DC supplies, 50/60 Hz AC connections, audio and control signals is to use a feedthrough capacitor , whose job is to short circuit any RF on the wire to ground. The use of ferrite beads is also common. 
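The SWR figure read from a directional wattmeter, described above, follows directly from the forward and reflected powers. A minimal sketch using the standard transmission-line relations, not tied to any particular instrument:

```python
# SWR from forward and reflected power readings.
import math

def swr(forward_watts, reflected_watts):
    gamma = math.sqrt(reflected_watts / forward_watts)  # reflection coefficient
    return (1 + gamma) / (1 - gamma)

print(swr(100.0, 4.0))   # 1.5 -> a 1.5:1 SWR for 4 W reflected of 100 W forward
```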
If an intentional transmitter produces interference, then it should be run into a dummy load; this is a resistor in a screened box or can which allows the transmitter to generate radio signals without sending them to the antenna. If the transmitter continues to cause interference during this test, then a path exists by which RF power is leaking out of the equipment, which can be due to bad shielding. Such leakage is most likely to occur on homemade equipment or equipment that has been modified or had covers removed. RF leakage from microwave ovens, while rare, may occur due to defective door seals, and may be a health hazard. Early in the development of radio technology it was recognized that the signals emitted by transmitters had to be 'pure'. Spark-gap transmitters were outlawed once better technology was available, as they give an output which is very wide in terms of frequency. The term spurious emissions refers to any signal which comes out of a transmitter other than the wanted signal. In modern equipment there are three main types of spurious emissions: harmonics, out-of-band mixer products which are not fully suppressed, and leakage from the local oscillator and other systems within the transmitter. Harmonics are multiples of the operating frequency of the transmitter; they can be generated in any stage of the transmitter which is not perfectly linear, and must be removed by filtering. The difficulty of removing harmonics from an amplifier will depend on the design. A push-pull amplifier will have fewer harmonics than a single-ended circuit. A class A amplifier will have very few harmonics, class AB or B more, and class C the most. In the typical class C amplifier, the resonant tank circuit will remove most of the harmonics, but in either of these examples a low-pass filter will likely be needed following the amplifier. In addition to the good design of the amplifier stages, the transmitter's output should be filtered with a low-pass filter to reduce the level of the harmonics. Typically the input and output of such a filter are interchangeable and match to 50 ohms, with inductance and capacitance values varying with frequency. Many transmitters switch in a suitable filter for the frequency band being used. The filter will pass the desired frequency and reduce all harmonics to acceptable levels. The harmonic output of a transmitter is best checked using an RF spectrum analyzer or by tuning a receiver to the various harmonics. If a harmonic falls on a frequency being used by another communications service, then this spurious emission can prevent an important signal from being received. Sometimes additional filtering is used to protect a sensitive range of frequencies, for example frequencies used by aircraft or by services involved with protection of life and property; even if a harmonic is within the legally allowed limits, it should be further reduced in such cases. When mixing signals to produce a desired output frequency, the choice of intermediate frequency and local oscillator is important. If poorly chosen, a spurious output can be generated. For example, if 50 MHz is mixed with 94 MHz to produce an output on 144 MHz, the third harmonic of the 50 MHz signal (150 MHz) may appear in the output (see the sketch following this passage). This problem is similar to the image response problem which exists in receivers. One method of reducing the potential for this transmitter defect is the use of balanced and double-balanced mixers. A simple mixer will pass both of the input frequencies and all of their harmonics along with the sum and difference frequencies.
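A small sketch of a spur search for the 50 MHz + 94 MHz = 144 MHz example above: enumerate low-order products m·f1 ± n·f2 and flag any that land near the output. The order limit and search window are assumptions chosen for illustration.

```python
# Enumerate low-order mixer products near the desired output frequency.
f1, f2, f_out = 50.0, 94.0, 144.0   # MHz
window = 10.0                        # flag products within +/-10 MHz (assumed)

for m in range(0, 6):
    for n in range(0, 6):
        if (m, n) in ((0, 0), (1, 1)):
            continue                 # skip the trivial case and the wanted product
        for product in {m * f1 + n * f2, abs(m * f1 - n * f2)}:
            if abs(product - f_out) <= window:
                print(f"{m}*f1, {n}*f2 -> {product:.0f} MHz")  # e.g. 3*f1 = 150
```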
If the simple mixer is replaced with a balanced mixer, then the number of possible products is reduced, and if the frequency mixer has fewer outputs, the task of making sure that the final output is clean is simpler. If a stage in a transmitter is unstable and is able to oscillate, then it can start to generate RF at either a frequency close to the operating frequency or at a very different frequency. One telltale sign is an RF stage producing output power even when it is not being driven by an exciting stage. Output power should increase smoothly as input power is increased, although with class C there will be a noticeable threshold effect. Various circuits are used for parasitic suppression in a good design, and proper neutralization is also important. The simplest transmitters, such as RFID devices, require no external controls, and simple tracking transmitters may have only an on-off switch. Many transmitters must have circuits that allow them to be turned on and off and the power output, frequency, or modulation levels adjusted. Many modern multi-featured transmitters allow the adjustment of many different parameters, usually under microprocessor control via multilevel menus, thus reducing the required number of physical knobs. Often a display screen provides feedback to the operator to assist in adjustments; the user-friendliness of this interface will often be one of the main factors in a successful design. Microprocessor-controlled transmitters may also include software to prevent off-frequency or other illegal operation. Transmitters using significant power or expensive components must also have protection circuits which prevent such things as overload, overheating or other abuse of the circuits. Overload circuits may include mechanical relays or electronic circuits, simple fuses may be included to protect expensive components, and arc detectors may shut off the transmitter when sparks or fires occur. Protection features must also prevent the human operator and the public from encountering the high voltages and power which exist inside the transmitter. Tube transmitters typically use DC voltages between 600 and 30,000 volts, which are deadly if contacted. Radio frequency power above about 10 watts can cause burning of human tissue through contact, and higher power can actually cook human flesh without contact. Metal shielding is required to isolate these dangers. Properly designed transmitters have doors or panels which are interlocked, so that open doors activate switches which do not allow the transmitter to be turned on while the dangerous areas are exposed. In addition, either resistors which bleed off the high voltages or shorting relays are employed to ensure that capacitors do not retain a dangerous charge after turn-off. With large high-power transmitters, the protective circuits can comprise a significant fraction of the total design complexity and cost. Some RFID devices take power from the external source that interrogates them, but most transmitters either have self-contained batteries or are mobile systems which typically operate directly from the 12 volt vehicle battery. Larger fixed transmitters will require power from the mains. The voltages used by a transmitter will be AC and DC of many different values, so either AC transformers or DC power supplies are required to provide the values of voltage and current needed to operate the various circuits, and some of these voltages will need to be regulated.
Thus a significant part of the total design will consist of power supplies. Power supplies will be integrated into the control and protection systems of the transmitter, which will turn them on in the proper sequence and protect them from overloads. Often rather complicated logic systems will be required for these functions.
https://en.wikipedia.org/wiki/Radio_transmitter_design
Radioactive contamination, also called radiological pollution, is the deposition of, or presence of, radioactive substances on surfaces or within solids, liquids, or gases (including the human body), where their presence is unintended or undesirable (from the International Atomic Energy Agency (IAEA) definition). [ 3 ] Such contamination presents a hazard because the radioactive decay of the contaminants produces ionizing radiation (namely alpha, beta, gamma rays and free neutrons). The degree of hazard is determined by the concentration of the contaminants, the energy of the radiation being emitted, the type of radiation, and the proximity of the contamination to organs of the body. It is important to be clear that the contamination gives rise to the radiation hazard, and the terms "radiation" and "contamination" are not interchangeable. The sources of radioactive pollution can be classified into two groups: natural and man-made. Following an atmospheric nuclear weapon discharge or a nuclear reactor containment breach, the air, soil, people, plants, and animals in the vicinity will become contaminated by nuclear fuel and fission products. A spilled vial of radioactive material like uranyl nitrate may contaminate the floor and any rags used to wipe up the spill. Cases of widespread radioactive contamination include the Bikini Atoll, the Rocky Flats Plant in Colorado, the area near the Fukushima Daiichi nuclear disaster, the area near the Chernobyl disaster, and the area near the Mayak disaster. Radioactive contamination can have a variety of causes. It may occur due to the release of radioactive gases, liquids or particles. For example, if a radionuclide used in nuclear medicine is spilled (accidentally or, as in the case of the Goiânia accident, through ignorance), the material could be spread by people as they walk around. Radioactive contamination may also be an inevitable result of certain processes, such as the release of radioactive xenon in nuclear fuel reprocessing. In cases where radioactive material cannot be contained, it may be diluted to safe concentrations. For a discussion of environmental contamination by alpha emitters, please see actinides in the environment. Nuclear fallout is the distribution of radioactive contamination by the 520 atmospheric nuclear explosions that took place from the 1950s to the 1980s. In nuclear accidents, a measure of the type and amount of radioactivity released, such as from a reactor containment failure, is known as the source term. The United States Nuclear Regulatory Commission defines this as "Types and amounts of radioactive or hazardous material released to the environment following an accident." [ 7 ] Contamination does not include residual radioactive material remaining at a site after the completion of decommissioning. Therefore, radioactive material in sealed and designated containers is not properly referred to as contamination, although the units of measurement might be the same. Containment is the primary way of preventing contamination from being released into the environment or coming into contact with or being ingested by humans. Being within the intended containment differentiates radioactive material from radioactive contamination. When radioactive materials are concentrated to a detectable level outside a containment, the area affected is generally referred to as "contaminated".
There are a large number of techniques for containing radioactive materials so that they do not spread beyond the containment and become contamination. In the case of liquids, this is done by the use of high-integrity tanks or containers, usually with a sump system so that leakage can be detected by radiometric or conventional instrumentation. Where the material is likely to become airborne, extensive use is made of the glovebox, which is a common technique in hazardous laboratory and process operations in many industries. The gloveboxes are kept under slight negative pressure and the vent gas is filtered through high-efficiency filters, which are monitored by radiological instrumentation to ensure they are functioning correctly. A variety of radionuclides occur naturally in the environment. Elements like uranium and thorium, and their decay products, are present in rock and soil. Potassium-40, a primordial nuclide, makes up a small percentage of all potassium and is present in the human body. Other nuclides, like carbon-14, which is present in all living organisms, are continuously created by cosmic rays. These levels of radioactivity pose little danger but can confuse measurement. A particular problem is encountered with naturally generated radon gas, which can affect instruments that are set to detect contamination close to normal background levels and can cause false alarms. Because of this, skill is required of the operator of radiological survey equipment to differentiate between background radiation and the radiation which emanates from contamination. Naturally occurring radioactive materials (NORM) can be brought to the surface or concentrated by human activities such as mining, oil and gas extraction, and coal consumption. Radioactive contamination may exist on surfaces or in volumes of material or air, and specialized techniques are used to measure the levels of contamination by detection of the emitted radiation. Contamination monitoring depends entirely upon the correct and appropriate deployment and utilisation of radiation monitoring instruments. Surface contamination may either be fixed or "free". In the case of fixed contamination, the radioactive material cannot by definition be spread, but its radiation is still measurable. In the case of free contamination, there is the hazard of contamination spread to other surfaces such as skin or clothing, or entrainment in the air. A concrete surface contaminated by radioactivity can be shaved to a specific depth, removing the contaminated material for disposal. For occupational workers, controlled areas are established where there may be a contamination hazard. Access to such areas is controlled by a variety of barrier techniques, sometimes involving changes of clothing and footwear as required. The contamination within a controlled area is normally regularly monitored. Radiological protection instrumentation (RPI) plays a key role in monitoring and detecting any potential contamination spread; combinations of hand-held survey instruments and permanently installed area monitors, such as airborne particulate monitors and area gamma monitors, are often installed. Detection and measurement of surface contamination of personnel and plant are normally by Geiger counter, scintillation counter or proportional counter. Proportional counters and dual-phosphor scintillation counters can discriminate between alpha and beta contamination, but the Geiger counter cannot.
Scintillation detectors are generally preferred for hand-held monitoring instruments and are designed with a large detection window to make monitoring of large areas faster. Geiger detectors tend to have small windows, which are more suited to small areas of contamination. The spread of contamination by personnel exiting controlled areas in which nuclear material is used or processed is monitored by specialised installed exit-control instruments such as frisk probes, hand contamination monitors and whole-body exit monitors. These are used to check that persons exiting controlled areas do not carry contamination on their bodies or clothes. In the United Kingdom, the HSE has issued a user guidance note on selecting the correct portable radiation measurement instrument for the application concerned. [ 8 ] This covers all radiation instrument technologies and is a useful comparative guide for selecting the correct technology for the contamination type. The UK NPL publishes a guide on the alarm levels to be used with instruments for checking personnel exiting controlled areas in which contamination may be encountered. [ 9 ] Surface contamination is usually expressed in units of radioactivity per unit of area for alpha or beta emitters. In SI units, this is becquerels per square meter (Bq/m²). Other units such as picocuries per 100 cm² or disintegrations per minute per square centimeter (1 dpm/cm² = 167 Bq/m²) may be used (see the conversion sketch following this passage). The air can be contaminated with radioactive isotopes in particulate form, which poses a particular inhalation hazard. Respirators with suitable air filters, or completely self-contained suits with their own air supply, can mitigate these dangers. Airborne contamination is measured by specialist radiological instruments that continuously pump the sampled air through a filter. Airborne particles accumulate on the filter and can be measured in a number of ways; commonly a semiconductor radiation detection sensor is used that can also provide spectrographic information on the contamination being collected. A particular problem with airborne contamination monitors designed to detect alpha particles is that naturally occurring radon can be quite prevalent and may appear as contamination when low contamination levels are being sought. Modern instruments consequently have "radon compensation" to overcome this effect. Radioactive contamination can enter the body through ingestion, inhalation, absorption, or injection, resulting in a committed dose. For this reason, it is important to use personal protective equipment when working with radioactive materials. Radioactive contamination may also be ingested as the result of eating contaminated plants and animals or drinking contaminated water or milk from exposed animals. Following a major contamination incident, all potential pathways of internal exposure should be considered. Chelation therapy, used successfully on Harold McCluskey, and other treatments exist for internal radionuclide contamination. [ 10 ] Cleaning up contamination results in radioactive waste unless the radioactive material can be returned to commercial use by reprocessing. In some cases of large areas of contamination, the contamination may be mitigated by burying and covering the contaminated substances with concrete, soil, or rock to prevent further spread of the contamination to the environment.
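A sketch of the surface-contamination unit conversion quoted above; it follows from 1 Bq = 60 dpm and 1 m² = 10,000 cm².

```python
# 1 dpm/cm^2 expressed in Bq/m^2, matching the "167" figure in the text.
def dpm_per_cm2_to_bq_per_m2(dpm_per_cm2):
    bq_per_cm2 = dpm_per_cm2 / 60.0   # disintegrations per second per cm^2
    return bq_per_cm2 * 10_000        # scale the area from cm^2 to m^2

print(dpm_per_cm2_to_bq_per_m2(1.0))  # ~166.7 Bq/m^2
```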
If a person's body is contaminated by ingestion or by injury and standard cleaning cannot reduce the contamination further, then the person may be permanently contaminated. [ citation needed ] Contamination control products have been used by the U.S. Department of Energy (DOE) and the commercial nuclear industry for decades to minimize contamination on radioactive equipment and surfaces and to fix contamination in place. "Contamination control products" is a broad term that includes fixatives, strippable coatings, and decontamination gels. A fixative product functions as a permanent coating to stabilize residual loose/transferable radioactive contamination by fixing it in place; this aids in preventing the spread of contamination and reduces the possibility of the contamination becoming airborne, reducing workforce exposure and facilitating future deactivation and decommissioning (D&D) activities. Strippable coating products are loosely adhering, paint-like films used for their decontamination abilities. They are applied to surfaces with loose/transferable radioactive contamination and then, once dried, are peeled off, which removes the loose/transferable contamination along with the product. The residual radioactive contamination on the surface is significantly reduced once the strippable coating is removed. Modern strippable coatings show high decontamination efficiency and can rival traditional mechanical and chemical decontamination methods. Decontamination gels work in much the same way as other strippable coatings. The results obtained through the use of contamination control products are variable and depend on the type of substrate, the selected contamination control product, the contaminants, and the environmental conditions (e.g., temperature, humidity, etc.). [ 2 ] Some of the largest areas committed to be decontaminated are in the Fukushima Prefecture, Japan. The national government is under pressure to clean up radioactivity from the Fukushima nuclear accident of March 2011 from as much land as possible, so that some of the 110,000 displaced people can return. Stripping the key radioisotope threatening health (caesium-137) out of low-level waste could also dramatically decrease the volume of waste requiring special disposal. A goal is to find techniques that might be able to strip out 80 to 95% of the caesium from contaminated soil and other materials, efficiently and without destroying the organic content in the soil. One technique being investigated is termed hydrothermal blasting. The caesium is broken away from soil particles and then precipitated with ferric ferrocyanide (Prussian blue). It would be the only component of the waste requiring special burial sites. [ 11 ] The aim is to get annual exposure from the contaminated environment down to one millisievert (mSv) above background. The most contaminated area, where radiation doses are greater than 50 mSv/year, must remain off-limits, but some areas that are currently less than 5 mSv/year may be decontaminated, allowing 22,000 residents to return. To help protect people living in geographical areas which have been radioactively contaminated, the International Commission on Radiological Protection has published a guide: "Publication 111 – Application of the Commission's Recommendations to the Protection of People Living in Long-term Contaminated Areas after a Nuclear Accident or a Radiation Emergency".
[ 12 ] The hazards to people and the environment from radioactive contamination depend on the nature of the radioactive contaminant, the level of contamination, and the extent of the spread of contamination. Low levels of radioactive contamination pose little risk, but can still be detected by radiation instrumentation. [ citation needed ] If a survey or map is made of a contaminated area, random sampling locations may be labeled with their activity in becquerels or curies on contact. Low levels may be reported in counts per minute using a scintillation counter. In the case of low-level contamination by isotopes with a short half-life, the best course of action may be to simply allow the material to naturally decay. Longer-lived isotopes should be cleaned up and properly disposed of, because even a very low level of radiation can be life-threatening with long exposure. Facilities and physical locations that are deemed to be contaminated may be cordoned off by a health physicist and labeled "Contaminated area". Persons coming near such an area would typically require anti-contamination clothing ("anti-Cs"). High levels of contamination may pose major risks to people and the environment. People can be exposed to potentially lethal radiation levels, both externally and internally, from the spread of contamination following an accident (or a deliberate initiation) involving large quantities of radioactive material. The biological effects of external exposure to radioactive contamination are generally the same as those from an external radiation source not involving radioactive materials, such as x-ray machines, and are dependent on the absorbed dose. When radioactive contamination is being measured or mapped in situ, any location that appears to be a point source of radiation is likely to be heavily contaminated. A highly contaminated location is colloquially referred to as a "hot spot". On a map of a contaminated place, hot spots may be labeled with their "on contact" dose rate in mSv/h. In a contaminated facility, hot spots may be marked with a sign, shielded with bags of lead shot, or cordoned off with warning tape containing the radioactive trefoil symbol. The hazard from contamination is the emission of ionizing radiation. The principal radiations which will be encountered are alpha, beta and gamma, but these have quite different characteristics, with widely differing penetrating powers and radiation effects. For an understanding of the different ionising effects of these radiations and the weighting factors applied, see the article on absorbed dose. Radiation monitoring involves the measurement of radiation dose or radionuclide contamination for reasons related to the assessment or control of exposure to radiation or radioactive substances, and the interpretation of the results. The methodological and technical details of the design and operation of environmental radiation monitoring programmes and systems for different radionuclides, environmental media and types of facility are given in IAEA Safety Standards Series No. RS–G-1.8 [ 13 ] and in IAEA Safety Reports Series No. 64. [ 14 ] Radioactive contamination by definition emits ionizing radiation, which can irradiate the human body from an external or internal origin. External exposure is due to radiation from contamination located outside the human body; the source can be in the vicinity of the body or can be on the skin surface.
The level of health risk depends on the duration and on the type and strength of the irradiation. Penetrating radiation such as gamma rays, X-rays, neutrons or beta particles poses the greatest risk from an external source. Weakly penetrating radiation such as alpha particles carries a low external risk due to the shielding effect of the top layers of skin. See the article on the sievert for more information on how this is calculated.

Radioactive contamination can be ingested into the human body if it is airborne or is taken in as contamination of food or drink, and will irradiate the body internally. The art and science of assessing internally generated radiation dose is internal dosimetry. The biological effects of ingested radionuclides depend greatly on the activity, the biodistribution, and the removal rates of the radionuclide, which in turn depend on its chemical form, the particle size, and the route of entry. Effects may also depend on the chemical toxicity of the deposited material, independent of its radioactivity. Some radionuclides may be generally distributed throughout the body and rapidly removed, as is the case with tritiated water.

Some organs concentrate certain elements and hence the radionuclide variants of those elements, which may lead to much lower removal rates. For instance, the thyroid gland takes up a large percentage of any iodine that enters the body. Large quantities of inhaled or ingested radioactive iodine may impair or destroy the thyroid, while other tissues are affected to a lesser extent. Radioactive iodine-131 is a common fission product; it was a major component of the radioactivity released from the Chernobyl disaster, leading to nine fatal cases of pediatric thyroid cancer and hypothyroidism. On the other hand, radioactive iodine is used in the diagnosis and treatment of many diseases of the thyroid precisely because of the thyroid's selective uptake of iodine.

The radiation risk model proposed by the International Commission on Radiological Protection (ICRP) predicts that an effective dose of one sievert (100 rem) carries a 5.5% chance of developing cancer. Such a risk is the sum of both internal and external radiation doses. [ 15 ] The ICRP states: "Radionuclides incorporated in the human body irradiate the tissues over time periods determined by their physical half-life and their biological retention within the body. Thus they may give rise to doses to body tissues for many months or years after the intake. The need to regulate exposures to radionuclides and the accumulation of radiation dose over extended periods of time has led to the definition of committed dose quantities". [ 16 ] The ICRP further states: "For internal exposure, committed effective doses are generally determined from an assessment of the intakes of radionuclides from bioassay measurements or other quantities (e.g., activity retained in the body or in daily excreta). The radiation dose is determined from the intake using recommended dose coefficients". [ 17 ]

The ICRP defines two dose quantities for individual committed dose. Committed equivalent dose, H_T(t), is the time integral of the equivalent dose rate in a particular tissue or organ that will be received by a Reference Person following intake of radioactive material into the body, where t is the integration time in years. [ 18 ] This refers to the dose in a specific tissue or organ, in a similar way to external equivalent dose.
Committed effective dose, E(t), is the sum of the products of the committed organ or tissue equivalent doses and the appropriate tissue weighting factors W_T, where t is the integration time in years following the intake. The commitment period is taken to be 50 years for adults, and to age 70 years for children. [ 18 ] This refers to the dose to the whole body, in a similar way to external effective dose.

A 2015 report in The Lancet explained that serious impacts of nuclear accidents were often not directly attributable to radiation exposure, but rather to social and psychological effects. [ 19 ] The consequences of low-level radiation are often more psychological than radiological. Because damage from very low-level radiation cannot be detected, people exposed to it are left in anguished uncertainty about what will happen to them. Many believe they have been fundamentally contaminated for life and may refuse to have children for fear of birth defects. They may be shunned by others in their community who fear a sort of mysterious contagion. [ 20 ]

Forced evacuation from a radiological or nuclear accident may lead to social isolation, anxiety, depression, psychosomatic medical problems, reckless behavior, even suicide. Such was the outcome of the 1986 Chernobyl nuclear disaster in Ukraine. A comprehensive 2005 study concluded that "the mental health impact of Chernobyl is the largest public health problem unleashed by the accident to date". [ 20 ] Frank N. von Hippel, a U.S. scientist, commented on the 2011 Fukushima nuclear disaster, saying that "fear of ionizing radiation could have long-term psychological effects on a large portion of the population in the contaminated areas". [ 21 ] Evacuation and long-term displacement of affected populations create problems for many people, especially the elderly and hospital patients. [ 19 ]

Such great psychological danger does not accompany other materials that put people at risk of cancer and other deadly illness. Visceral fear is not widely aroused by, for example, the daily emissions from coal burning, although, as a National Academy of Sciences study found, these cause 10,000 premature deaths a year in the US population of 317,413,000. Nor is it aroused by medical errors leading to death in U.S. hospitals, estimated at between 44,000 and 98,000 annually. It is "only nuclear radiation that bears a huge psychological burden – for it carries a unique historical legacy". [ 20 ]
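To make the two committed-dose quantities concrete, the sketch below sums hypothetical committed equivalent doses over tissues using a small subset of the ICRP Publication 103 tissue weighting factors, then applies the 5.5% per sievert nominal risk figure quoted earlier. The organ doses are invented for the example; this illustrates the arithmetic, not a real assessment.

```python
# Illustrative only: committed effective dose E(t) as the weighted sum of
# committed equivalent doses H_T(t). The doses below are invented; the
# weighting factors are a subset of ICRP Publication 103 values.
tissue_weights = {"thyroid": 0.04, "lung": 0.12, "red_marrow": 0.12}
committed_equivalent_dose_sv = {"thyroid": 0.030, "lung": 0.002, "red_marrow": 0.001}

# E(t) = sum over tissues T of w_T * H_T(t)
effective_dose_sv = sum(
    tissue_weights[t] * committed_equivalent_dose_sv[t]
    for t in committed_equivalent_dose_sv
)

# ICRP nominal risk coefficient quoted above: ~5.5% per sievert
cancer_risk = 0.055 * effective_dose_sv

print(f"Committed effective dose: {effective_dose_sv * 1000:.2f} mSv")
print(f"Nominal lifetime cancer risk: {cancer_risk:.2e}")
```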
https://en.wikipedia.org/wiki/Radioactive_contamination
The law of radioactive displacements, also known as Fajans's and Soddy's law, in radiochemistry and nuclear physics, is a rule governing the transmutation of elements during radioactive decay. It is named after Frederick Soddy and Kazimierz Fajans, who independently arrived at it at about the same time in 1913. [ 1 ] [ 2 ] The law describes which chemical element and isotope is created during the particular type of radioactive decay: in alpha decay, the mass number decreases by four and the atomic number decreases by two, so the daughter lies two places to the left of the parent in the periodic table; in beta decay, the mass number is unchanged while the atomic number increases by one, so the daughter lies one place to the right.
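The rule can be stated compactly in code. A minimal sketch: given a parent nuclide's atomic number Z and mass number A, the daughter's values follow directly for the two decay modes the law covers (function and variable names are chosen for the example).

```python
def displace(z: int, a: int, decay: str) -> tuple[int, int]:
    """Apply the Fajans-Soddy displacement law to a parent nuclide (Z, A)."""
    if decay == "alpha":   # emission of a helium nucleus: Z - 2, A - 4
        return z - 2, a - 4
    if decay == "beta-":   # a neutron becomes a proton: Z + 1, A unchanged
        return z + 1, a
    raise ValueError("the displacement law covers alpha and beta decay")

# Uranium-238 (Z=92) alpha-decays to thorium-234 (Z=90):
print(displace(92, 238, "alpha"))   # (90, 234)
# Thorium-234 beta-decays to protactinium-234 (Z=91):
print(displace(90, 234, "beta-"))   # (91, 234)
```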
https://en.wikipedia.org/wiki/Radioactive_displacement_law_of_Fajans_and_Soddy
Radioactive scrap metal is created when radioactive material enters the metal recycling process and contaminates scrap metal. A "lost source accident" [ 1 ] [ 2 ] occurs when a radioactive object is lost or stolen. Such objects may appear in the scrap metal industry if people mistake them for harmless bits of metal. [ 3 ] The International Atomic Energy Agency has provided guides for scrap metal collectors on what a sealed source might look like. [ 4 ] [ 5 ] The best known example of this type of event is the Goiânia accident in Brazil. While some lost-source accidents have not involved the scrap metal industry, they are good examples of the likely scale and scope of a lost-source accident. For example, the Red Army left sources behind in Didi Lilo, Georgia. [ 6 ] Another case occurred at Yanango, Peru, where an iridium-192 radiography source was lost, and at Gilan, Iran, a radiography source harmed a welder. [ 7 ]

Radioactive sources have a wide range of uses in medicine and industry, and it is common for the design (and nature) of a source to be tailored to the specific application. Hence, it is impossible to state with confidence what a "typical" source looks like or contains. For instance, antistatic devices include beta and alpha emitters: polonium-containing devices have been used to eliminate static electricity in such devices as paint spraying equipment. [ 8 ] An overview of the gamma sources used for radiography can be seen at radiographic equipment, and it is reasonable to consider this a good overview of small to moderate gamma sources.

The cleanup operation for the Goiânia accident [ 20 ] was difficult both because the source containment had been opened and because the radioactive material was water-soluble. A 1983 incident in Mexico, in which cobalt-60 was spilled in an otherwise similar exposure, led to a very different pattern of contamination, since the cobalt in such a source is normally in the form of cobalt metal alloyed with some nickel to improve the mechanical properties of the radioactive metal. If such a source is abused, the cobalt metal fragments do not tend to dissolve in water or become very mobile.

If a cobalt or iridium source is lost at a ferrous metal scrapyard, it is often the case that the source will enter a furnace, where the radioactive metal will melt and contaminate the steel from that furnace. In Mexico, some buildings have been demolished because of the level of cobalt-60 in the steel used to make them. Also, some of the steel which was rendered radioactive in the Mexican event was used to make legs for 1400 tables. [ 15 ] In the case of some high-value scrap metals it is possible to decontaminate the material, but this is best done long before the metal goes to a scrap yard. [ 21 ] [ 22 ]

In the case of a caesium source being melted in an electric arc furnace used for steel scrap, it is more likely that the caesium will contaminate the fly ash or dust from the furnace, while radium is likely to stay in the ash or slag. The United States Environmental Protection Agency provides data about the fate of different contaminating elements in a scrap furnace.
[ 23 ] Four different fates exist for a given element: it can stay in the metal (as with cobalt and ruthenium); it can enter the slag (as with the lanthanides, the actinides, and radium); it can enter the furnace dust or fly ash (as with caesium), which accounts for around 5%; or it can leave the furnace and pass through the baghouse to enter the air (as with iodine).

It is normal to place silicon, aluminium scrap and flux in a furnace, which is heated to form molten aluminium. From the furnace three main streams are obtained: the metal product; dross (metal oxides and halides, which are skimmed off the molten metal product); and off-gases, which go to the baghouse. The cooled waste gases are then released into the environment.

It is normal for good-quality scrap copper, such as that from a nuclear plant, to be refined in one furnace before being refined further in an electrochemical process. The furnace generates impure metal, slag, dust and gases. The dust accumulates in a baghouse, while the gases are vented to the atmosphere. The impure metal from the furnace may be further refined electrochemically; where the refinery includes such a step, unwanted elements are removed from the impure metal and deposited as anode slime.
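The four fates above can be captured in a simple lookup, of the kind that might be used to screen scrap for likely contamination pathways. This is a toy mapping covering only the elements named in this article; a real assessment would rely on the EPA partitioning data cited.

```python
# Toy lookup of where a contaminant ends up when scrap is melted in a
# furnace, based on the four fates described above.
FURNACE_FATE = {
    "cobalt":    "stays in the metal",
    "ruthenium": "stays in the metal",
    "radium":    "enters the slag",
    "caesium":   "enters the furnace dust / fly ash",
    "iodine":    "passes through the baghouse into the air",
}

def fate_of(element: str) -> str:
    # Elements not named in this article fall through to a safe default.
    return FURNACE_FATE.get(element.lower(), "unknown; consult partitioning data")

print(fate_of("Caesium"))  # enters the furnace dust / fly ash
print(fate_of("Radium"))   # enters the slag
```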
https://en.wikipedia.org/wiki/Radioactive_scrap_metal
A radioactive source is a known quantity of a radionuclide which emits ionizing radiation, typically one or more of the radiation types gamma rays, alpha particles, beta particles, and neutron radiation. Sources can be used for irradiation, where the radiation performs a significant ionising function on a target material, or as a radiation metrology source, used for the calibration of radiometric process and radiation protection instrumentation. They are also used for industrial process measurements, such as thickness gauging in the paper and steel industries. Sources can be sealed in a container (highly penetrating radiation) or deposited on a surface (weakly penetrating radiation), or they can be in a fluid.

As irradiation sources they are used in medicine for radiation therapy and in industry for applications such as industrial radiography, food irradiation, sterilization, vermin disinfestation, and irradiation cross-linking of PVC. Radionuclides are chosen according to the type and character of the radiation they emit, the intensity of emission, and the half-life of their decay. Common source radionuclides include cobalt-60, [ 1 ] iridium-192, [ 2 ] and strontium-90. [ 3 ] The SI unit of source activity is the becquerel, though the historical unit, the curie, is still in partial use, such as in the US, despite the US NIST strongly advising the use of the SI unit. [ 4 ] The SI unit for health purposes is mandatory in the EU. An irradiation source typically lasts for between 5 and 15 years before its activity drops below useful levels. [ 5 ] However, sources with long half-life radionuclides, when utilised as calibration sources, can be used for much longer.

Many radioactive sources are sealed, meaning they are permanently either completely contained in a capsule or firmly bonded solid to a surface. Capsules are usually made of stainless steel, titanium, platinum or another inert metal. [ 5 ] The use of sealed sources removes almost all risk of dispersion of radioactive material into the environment due to mishandling, [ 6 ] but the container is not intended to attenuate radiation, so further shielding is required for radiation protection. [ 7 ] Sealed sources are used in almost all applications where the source does not need to be chemically or physically included in a liquid or gas.

Sealed sources are categorised by the IAEA according to their activity in relation to a minimum dangerous source (where a dangerous source is one that could cause significant injury to humans). [ 9 ] The ratio used is A/D, where A is the activity of the source and D is the minimum dangerous activity. Sources with sufficiently low radioactive output to cause no harm to humans (such as those used in smoke detectors) are not categorised.

Calibration sources are used primarily for the calibration of radiometric instrumentation, used in process monitoring or in radiological protection. Capsule sources, where the radiation effectively emits from a point, are used for beta, gamma and X-ray instrument calibration. High-level sources are normally used in a calibration cell: a room with thick walls to protect the operator and provision for remote operation of the source exposure. The plate source is in common use for the calibration of radioactive contamination instruments.
This has a known amount of radioactive material fixed to its surface, such as an alpha and/or beta emitter, to allow the calibration of large-area radiation detectors used for contamination surveys and personnel monitoring. Such measurements are typically counts per unit time received by the detector, such as counts per minute or counts per second. Unlike a capsule source, the emitting material of a plate source must be on the surface, to prevent attenuation by a container or self-shielding by the material itself. This is particularly important with alpha particles, which are easily stopped by even a small mass; the Bragg curve shows this attenuation effect in free air.

Unsealed sources are sources that are not in a permanently sealed container and are used extensively for medical purposes. [ 10 ] They are used when the source needs to be dissolved in a liquid for injection into a patient or ingestion by the patient. Unsealed sources are also used in industry in a similar manner, for example for leak detection, as a radioactive tracer.

Disposal of expired radioactive sources presents challenges similar to those of other nuclear waste, although to a lesser degree. Spent low-level sources will sometimes be sufficiently inactive that they are suitable for disposal via normal waste disposal methods, usually landfill. Other disposal methods are similar to those for higher-level radioactive waste, using boreholes of various depths depending on the activity of the waste. [ 5 ] A notorious incident of neglect in disposing of a high-level source was the Goiânia accident, which resulted in several fatalities. The Tammiku radioactive material theft involved the accidental theft of caesium-137 material in Tammiku, Estonia, in 1994.
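As an illustration of the A/D categorisation described above, the sketch below assigns a source to a category from its activity and the D value of its radionuclide. The category boundaries used (1000, 10, 1 and 0.01) are the ones commonly quoted for the IAEA scheme and should be treated as an assumption here, as should the example D value; the authoritative figures are in IAEA guidance.

```python
def iaea_category(activity_tbq: float, d_value_tbq: float) -> int:
    """Assign an IAEA-style source category from the A/D ratio (assumed cut-offs)."""
    ratio = activity_tbq / d_value_tbq  # A/D: activity vs. minimum dangerous activity
    if ratio >= 1000:
        return 1  # most dangerous category
    if ratio >= 10:
        return 2
    if ratio >= 1:
        return 3
    if ratio >= 0.01:
        return 4
    return 5

# Example: a 300 TBq cobalt-60 irradiator source, taking a D value of
# ~0.03 TBq for Co-60 (an assumption for this illustration):
print(iaea_category(300, 0.03))  # -> 1
```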
https://en.wikipedia.org/wiki/Radioactive_source
A radioactive tracer, radiotracer, or radioactive label is a synthetic derivative of a natural compound in which one or more atoms have been replaced by a radionuclide (a radioactive atom). By virtue of its radioactive decay, it can be used to explore the mechanism of chemical reactions by tracing the path that the radioisotope follows from reactants to products. Radiolabeling or radiotracing is thus the radioactive form of isotopic labeling. In biological contexts, experiments that use radioisotope tracers are sometimes called radioisotope feeding experiments. Radioisotopes of hydrogen, carbon, phosphorus, sulfur, and iodine have been used extensively to trace the path of biochemical reactions. A radioactive tracer can also be used to track the distribution of a substance within a natural system such as a cell or tissue, [ 1 ] or as a flow tracer to track fluid flow. Radioactive tracers are also used to determine the location of fractures created by hydraulic fracturing in natural gas production. [ 2 ] Radioactive tracers form the basis of a variety of imaging systems, such as PET scans, SPECT scans and technetium scans. Radiocarbon dating uses the naturally occurring carbon-14 isotope as an isotopic label.

In radiopharmaceutical sciences some misuse of established scientific terms exists. Therefore, an international "Working Group on Nomenclature in Radiopharmaceutical Chemistry and Related Areas" was formed in 2015 by the Society of Radiopharmaceutical Sciences (SRS). Its goal was to clarify terminology and to establish a standardized nomenclature through global consensus, ensuring consistency and accuracy within the discipline. [ 3 ]

Isotopes of a chemical element differ only in the mass number. For example, the isotopes of hydrogen can be written as ¹H, ²H and ³H, with the mass number superscripted to the left. When the atomic nucleus of an isotope is unstable, compounds containing this isotope are radioactive. Tritium is an example of a radioactive isotope. The principle behind the use of radioactive tracers is that an atom in a chemical compound is replaced by another atom of the same chemical element. The substituting atom, however, is a radioactive isotope. This process is often called radioactive labeling. The power of the technique is due to the fact that radioactive decay is much more energetic than chemical reactions. Therefore, the radioactive isotope can be present in low concentration and its presence detected by sensitive radiation detectors such as Geiger counters and scintillation counters. George de Hevesy won the 1943 Nobel Prize for Chemistry "for his work on the use of isotopes as tracers in the study of chemical processes".

There are two main ways in which radioactive tracers are used: a labeled compound can undergo a chemical reaction, so that analysing which products carry the radioactive label reveals the mechanism of the reaction; or a labeled compound can be introduced into a living organism, where the radioisotope provides a means of imaging how the compound and its reaction products are distributed.

The commonly used radioisotopes have short half-lives and so do not occur in nature in large amounts. They are produced by nuclear reactions. One of the most important processes is absorption of a neutron by an atomic nucleus, in which the mass number of the element concerned increases by 1 for each neutron absorbed; for example, cobalt-59 absorbs a neutron to become cobalt-60. In this case the atomic mass increases, but the element is unchanged. In other cases the product nucleus is unstable and decays, typically emitting protons, electrons (beta particles) or alpha particles. When a nucleus loses a proton, the atomic number decreases by 1; for example, neutron bombardment of sulfur-32 yields phosphorus-32 by ejection of a proton. Neutron irradiation is performed in a nuclear reactor. The other main method used to synthesize radioisotopes is proton bombardment.
The protons are accelerated to high energy either in a cyclotron or a linear accelerator. [ 4 ]

Tritium (hydrogen-3) is produced by neutron irradiation of ⁶Li. Tritium has a half-life of 4500 ± 8 days (approximately 12.32 years) [ 5 ] and decays by beta decay. The electrons produced have an average energy of 5.7 keV. Because the emitted electrons have relatively low energy, the detection efficiency by scintillation counting is rather low. However, hydrogen atoms are present in all organic compounds, so tritium is frequently used as a tracer in biochemical studies.

¹¹C decays by positron emission with a half-life of about 20 minutes. It is one of the isotopes often used in positron emission tomography. [ 4 ]

¹⁴C decays by beta decay, with a half-life of 5730 years. It is continuously produced in the upper atmosphere of the Earth, so it occurs at a trace level in the environment. However, it is not practical to use naturally occurring ¹⁴C for tracer studies. Instead it is made by neutron irradiation of the isotope ¹³C, which occurs naturally in carbon at about the 1.1% level. ¹⁴C has been used extensively to trace the progress of organic molecules through metabolic pathways. [ 6 ]

¹³N decays by positron emission with a half-life of 9.97 minutes. It is produced by a nuclear reaction and is used in positron emission tomography (PET).

¹⁵O decays by positron emission with a half-life of 122 seconds. It is used in positron emission tomography.

¹⁸F decays predominantly by β⁺ (positron) emission, with a half-life of 109.8 minutes. It is made by proton bombardment of ¹⁸O in a cyclotron or linear particle accelerator. It is an important isotope in the radiopharmaceutical industry; for example, it is used to make labeled fluorodeoxyglucose (FDG) for application in PET scans. [ 4 ]

³²P is made by neutron bombardment of ³²S. It decays by beta decay with a half-life of 14.29 days. It is commonly used to study protein phosphorylation by kinases in biochemistry.

³³P is made in relatively low yield by neutron bombardment of ³¹P. It is also a beta emitter, with a half-life of 25.4 days. Though more expensive than ³²P, the emitted electrons are less energetic, permitting better resolution in, for example, DNA sequencing. Both isotopes are useful for labeling nucleotides and other species that contain a phosphate group.

³⁵S is made by neutron bombardment of ³⁵Cl. It decays by beta decay with a half-life of 87.51 days. It is used to label the sulfur-containing amino acids methionine and cysteine. When a sulfur atom replaces an oxygen atom in a phosphate group on a nucleotide, a thiophosphate is produced, so ³⁵S can also be used to trace a phosphate group.

⁹⁹ᵐTc is a very versatile radioisotope and is the most commonly used radioisotope tracer in medicine. It is easy to produce in a technetium-99m generator, by decay of ⁹⁹Mo. The molybdenum isotope has a half-life of approximately 66 hours (2.75 days), so the generator has a useful life of about two weeks. Most commercial ⁹⁹ᵐTc generators use column chromatography, in which ⁹⁹Mo in the form of molybdate, MoO₄²⁻, is adsorbed onto acid alumina (Al₂O₃). When the ⁹⁹Mo decays it forms pertechnetate, TcO₄⁻, which because of its single charge is less tightly bound to the alumina. Pulling normal saline solution through the column of immobilized ⁹⁹Mo elutes the soluble ⁹⁹ᵐTc, resulting in a saline solution containing the ⁹⁹ᵐTc as the dissolved sodium salt of the pertechnetate.
The pertechnetate is treated with a reducing agent such as Sn²⁺ and a ligand. Different ligands form coordination complexes which give the technetium enhanced affinity for particular sites in the human body. ⁹⁹ᵐTc decays by gamma emission with a half-life of 6.01 hours. The short half-life ensures that the body concentration of the radioisotope falls effectively to zero in a few days.

¹²³I is produced by proton irradiation of ¹²⁴Xe. The caesium isotope produced is unstable and decays to ¹²³I. The isotope is usually supplied as the iodide and hypoiodate in dilute sodium hydroxide solution, at high isotopic purity. [ 7 ] ¹²³I has also been produced at Oak Ridge National Laboratory by proton bombardment of ¹²³Te. [ 8 ] ¹²³I decays by electron capture with a half-life of 13.22 hours. The emitted 159 keV gamma ray is used in single-photon emission computed tomography (SPECT); a 127 keV gamma ray is also emitted.

¹²⁵I is frequently used in radioimmunoassays because of its relatively long half-life (59 days) and its ability to be detected with high sensitivity by gamma counters. [ 9 ]

¹²⁹I is present in the environment as a result of the testing of nuclear weapons in the atmosphere. It was also produced in the Chernobyl and Fukushima disasters. ¹²⁹I decays with a half-life of 15.7 million years, with low-energy beta and gamma emissions. It is not used as a tracer, though its presence in living organisms, including human beings, can be characterized by measurement of the gamma rays.

Many other isotopes have been used in specialized radiopharmacological studies. The most widely used is ⁶⁷Ga, for gallium scans. ⁶⁷Ga is used because, like ⁹⁹ᵐTc, it is a gamma-ray emitter, and various ligands can be attached to the Ga³⁺ ion, forming a coordination complex which may have selective affinity for particular sites in the human body. An extensive list of radioactive tracers used in hydraulic fracturing can be found below.

In metabolism research, tritium- and ¹⁴C-labeled glucose are commonly used in glucose clamps to measure rates of glucose uptake, fatty acid synthesis, and other metabolic processes. [ 10 ] While radioactive tracers are sometimes still used in human studies, stable isotope tracers such as ¹³C are more commonly used in current human clamp studies. Radioactive tracers are also used to study lipoprotein metabolism in humans and experimental animals. [ 11 ]

In medicine, tracers are applied in a number of tests, such as ⁹⁹ᵐTc in autoradiography and nuclear medicine, including single-photon emission computed tomography (SPECT), positron emission tomography (PET) and scintigraphy. The urea breath test for Helicobacter pylori commonly used a dose of ¹⁴C-labeled urea to detect H. pylori infection. If the labeled urea was metabolized by H. pylori in the stomach, the patient's breath would contain labeled carbon dioxide. In recent years, the use of substances enriched in the non-radioactive isotope ¹³C has become the preferred method, avoiding patient exposure to radioactivity. [ 12 ]

In hydraulic fracturing, radioactive tracer isotopes are injected with hydraulic fracturing fluid to determine the injection profile and the location of created fractures. [ 2 ] Tracers with different half-lives are used for each stage of hydraulic fracturing. In the United States, the amounts of radionuclide permitted per injection are listed in US Nuclear Regulatory Commission (NRC) guidelines. [ 13 ]
According to the NRC, some of the most commonly used tracers include antimony-124, bromine-82, iodine-125, iodine-131, iridium-192, and scandium-46. [ 13 ] A 2003 publication by the International Atomic Energy Agency confirms the frequent use of most of the tracers above, and says that manganese-56, sodium-24, technetium-99m, silver-110m, argon-41, and xenon-133 are also used extensively because they are easily identified and measured. [ 14 ]
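Whatever the application, tracer measurements are routinely corrected for radioactive decay between a reference time and the time of measurement, using the half-lives quoted throughout this article. A minimal sketch, using the 6.01-hour half-life of technetium-99m given above; the measurement values are invented for the example.

```python
def decay_correct(measured: float, elapsed_h: float, half_life_h: float) -> float:
    """Return the activity at the reference time, given a later measurement.

    Uses N(t) = N0 * 2**(-t / half_life), rearranged for N0.
    """
    return measured * 2 ** (elapsed_h / half_life_h)

TC99M_HALF_LIFE_H = 6.01  # technetium-99m half-life quoted above

# A syringe assayed at 350 MBq three hours after the reference time
# held about 495 MBq at the reference time:
print(f"{decay_correct(350, 3.0, TC99M_HALF_LIFE_H):.0f} MBq")
```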
https://en.wikipedia.org/wiki/Radioactive_tracer
Radioactive waste is a type of hazardous waste that contains radioactive material. It is a result of many activities, including nuclear medicine, nuclear research, nuclear power generation, nuclear decommissioning, rare-earth mining, and nuclear weapons reprocessing. [ 1 ] The storage and disposal of radioactive waste are regulated by government agencies in order to protect human health and the environment.

Radioactive waste is broadly classified into three categories: low-level waste (LLW), such as paper, rags, tools and clothing, which contains small amounts of mostly short-lived radioactivity; intermediate-level waste (ILW), which contains higher amounts of radioactivity and requires some shielding; and high-level waste (HLW), which is highly radioactive and hot due to decay heat, thus requiring cooling and shielding.

Spent nuclear fuel can be processed in nuclear reprocessing plants; about one third of the total amount has already been reprocessed. With nuclear reprocessing, 96% of the spent fuel can be recycled back into uranium-based and mixed-oxide (MOX) fuels. [ 2 ] The residual 4% consists of minor actinides and fission products, the latter being a mixture of stable and quickly decaying elements (most likely already having decayed in the spent fuel pool), medium-lived fission products such as strontium-90 and caesium-137, and finally seven long-lived fission products with half-lives in the hundreds of thousands to millions of years. The minor actinides, meanwhile, are heavy elements other than uranium and plutonium which are created by neutron capture. Their half-lives range from years to millions of years, and as alpha emitters they are particularly radiotoxic. While there are proposed (and, to a much lesser extent, current) uses for all those elements, commercial-scale reprocessing using the PUREX process disposes of them as waste together with the fission products. The waste is subsequently converted into a glass-like ceramic for storage in a deep geological repository.

The time radioactive waste must be stored depends on the type of waste and the radioactive isotopes it contains. Short-term approaches to radioactive waste storage have been segregation and storage on or near the surface of the earth. Burial in a deep geological repository is a favored solution for long-term storage of high-level waste, while re-use and transmutation are favored solutions for reducing the HLW inventory. Boundaries to recycling of spent nuclear fuel are regulatory and economic, as well as the risk of radioactive contamination if chemical separation processes cannot achieve a very high purity. Furthermore, elements may be present in both useful and troublesome isotopes, which would require costly and energy-intensive isotope separation for their use, a currently uneconomic prospect.

A summary of the amounts of radioactive waste and the management approaches of most developed countries is presented and reviewed periodically as part of a joint convention of the International Atomic Energy Agency (IAEA). [ 3 ]

A quantity of radioactive waste typically consists of a number of radionuclides, which are unstable isotopes of elements that undergo decay and thereby emit ionizing radiation, which is harmful to humans and the environment. Different isotopes emit different types and levels of radiation, which last for different periods of time. The radioactivity of all radioactive waste weakens with time.
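The next paragraph notes that, for the same number of atoms, a long-lived isotope radiates far less intensely than a short-lived one. This follows from the relation A = (ln 2 / T½) · N, illustrated below with the half-lives of iodine-131 and iodine-129 quoted in this article; the atom count is arbitrary.

```python
import math

def activity_bq(n_atoms: float, half_life_s: float) -> float:
    """Activity in becquerels: A = (ln 2 / half_life) * N."""
    return math.log(2) / half_life_s * n_atoms

DAY = 86_400
YEAR = 365.25 * DAY
n = 1e20  # same (arbitrary) number of atoms for both isotopes

a_i131 = activity_bq(n, 8.02 * DAY)       # iodine-131, ~8-day half-life
a_i129 = activity_bq(n, 15.7e6 * YEAR)    # iodine-129, 15.7-million-year half-life

print(f"I-131: {a_i131:.3e} Bq, I-129: {a_i129:.3e} Bq")
print(f"I-131 is ~{a_i131 / a_i129:.1e} times more intense per atom")
```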
All radionuclides contained in the waste have a half-life, the time it takes for half of the atoms to decay into another nuclide. Eventually, all radioactive waste decays into non-radioactive elements (i.e., stable nuclides). Since radioactive decay follows the half-life rule, for a given number of atoms the rate of decay is inversely proportional to the half-life. In other words, the radiation from a long-lived isotope like iodine-129 will be much less intense than that of a short-lived isotope like iodine-131. [ 4 ] The energy and the type of the ionizing radiation emitted by a radioactive substance are also important factors in determining its threat to humans. [ 5 ] The chemical properties of the radioactive element will determine how mobile the substance is and how likely it is to spread into the environment and contaminate humans. [ 6 ] This is further complicated by the fact that many radioisotopes do not decay immediately to a stable state but rather to radioactive decay products within a decay chain before ultimately reaching a stable state.

Exposure to radioactive waste may cause health impacts due to ionizing radiation exposure. In humans, a dose of 1 sievert carries a 5.5% risk of developing cancer, [ 7 ] and regulatory agencies assume the risk is linearly proportional to dose even for low doses. Ionizing radiation can cause deletions in chromosomes. [ 8 ] If a developing organism such as a fetus is irradiated, it is possible a birth defect may be induced, but it is unlikely this defect will be in a gamete or a gamete-forming cell. The incidence of radiation-induced mutations in humans is small, as in most mammals, because of natural cellular repair mechanisms, many just now coming to light. These mechanisms range from DNA, mRNA and protein repair to internal lysosomic digestion of defective proteins, and even induced cell suicide (apoptosis). [ 9 ]

Depending on the decay mode and the pharmacokinetics of an element (how the body processes it and how quickly), the threat due to exposure to a given activity of a radioisotope will differ. For instance, iodine-131 is a short-lived beta and gamma emitter, but because it concentrates in the thyroid gland, it is more able to cause injury than caesium-137 which, being water-soluble, is rapidly excreted through urine. In a similar way, the alpha-emitting actinides and radium are considered very harmful as they tend to have long biological half-lives and their radiation has a high relative biological effectiveness, making it far more damaging to tissues per amount of energy deposited. Because of such differences, the rules determining biological injury differ widely according to the radioisotope, the time of exposure, and sometimes also the nature of the chemical compound which contains the radioisotope. Notably, no fission products have a half-life in the range of 100 years to 210,000 years, nor beyond 15.7 million years. [ 14 ]

Radioactive waste comes from a number of sources. In countries with nuclear power plants, nuclear armament, or nuclear fuel treatment plants, the majority of waste originates from the nuclear fuel cycle and nuclear weapons reprocessing. Other sources include medical and industrial wastes, as well as naturally occurring radioactive materials (NORM) that can be concentrated as a result of the processing or consumption of coal, oil, and gas, and some minerals, as discussed below.
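The retention behaviour described above (iodine concentrating in the thyroid, caesium being excreted quickly) is often summarised by an effective half-life that combines physical decay with biological elimination: 1/T_eff = 1/T_phys + 1/T_bio. A minimal sketch; the roughly 80-day biological half-life of iodine in the thyroid is a commonly quoted textbook figure, used here only for illustration.

```python
def effective_half_life(t_phys: float, t_bio: float) -> float:
    """Combine physical and biological half-lives: 1/T_eff = 1/T_phys + 1/T_bio."""
    return 1.0 / (1.0 / t_phys + 1.0 / t_bio)

# Iodine-131: physical half-life ~8.02 days, biological half-life in the
# thyroid ~80 days (assumed figure), giving an effective half-life ~7.3 days.
print(f"{effective_half_life(8.02, 80.0):.1f} days")
```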
Waste from the front end of the nuclear fuel cycle is usually alpha-emitting waste from the extraction of uranium. It often contains radium and its decay products. Uranium dioxide (UO₂) concentrate from mining is a thousand or so times as radioactive as the granite used in buildings. It is refined from yellowcake (U₃O₈), then converted to uranium hexafluoride gas (UF₆). As a gas, it undergoes enrichment to increase the U-235 content from 0.7% to about 4.4% (LEU). It is then turned into a hard ceramic oxide (UO₂) for assembly as reactor fuel elements. [ 15 ] The main by-product of enrichment is depleted uranium (DU), principally the U-238 isotope, with a U-235 content of ~0.3%. It is stored either as UF₆ or as U₃O₈. Some is used in applications where its extremely high density makes it valuable, such as anti-tank shells and, on at least one occasion, even a sailboat keel. [ 16 ] It is also used with plutonium for making mixed oxide fuel (MOX) and to dilute, or downblend, highly enriched uranium from weapons stockpiles, which is now being redirected to become reactor fuel.

The back end of the nuclear fuel cycle, mostly spent fuel rods, contains fission products that emit beta and gamma radiation, and actinides that emit alpha particles, such as uranium-234 (half-life 245 thousand years), neptunium-237 (2.144 million years), plutonium-238 (87.7 years) and americium-241 (432 years), and sometimes even neutron emitters such as californium (half-life of 898 years for californium-251). These isotopes are formed in nuclear reactors.

It is important to distinguish the processing of uranium to make fuel from the reprocessing of used fuel. Used fuel contains the highly radioactive products of fission (see high-level waste below). Many of these are neutron absorbers, called neutron poisons in this context. These eventually build up to a level where they absorb so many neutrons that the chain reaction stops, even with the control rods completely removed from a reactor. At that point, the fuel has to be replaced in the reactor with fresh fuel, even though there is still a substantial quantity of uranium-235 and plutonium present. In the United States, this used fuel is usually "stored", while in other countries such as Russia, the United Kingdom, France, Japan, and India, the fuel is reprocessed to remove the fission products, and the fuel can then be re-used. [ 17 ] The fission products removed from the fuel are a concentrated form of high-level waste, as are the chemicals used in the process. While most countries reprocess the fuel carrying out single plutonium cycles, India is planning multiple plutonium recycling schemes [ 18 ] and Russia pursues a closed cycle. [ 19 ]

The use of different fuels in nuclear reactors results in different spent nuclear fuel (SNF) compositions, with varying activity curves. The most abundant material is U-238, alongside other uranium isotopes, other actinides, fission products and activation products. [ 20 ] Long-lived radioactive waste from the back end of the fuel cycle is especially relevant when designing a complete waste management plan for SNF. When looking at long-term radioactive decay, the actinides in the SNF have a significant influence due to their characteristically long half-lives. Depending on what a nuclear reactor is fueled with, the actinide composition in the SNF will be different. An example of this effect is the use of nuclear fuels with thorium.
Th-232 is a fertile material that can undergo a neutron capture reaction and two beta-minus decays, resulting in the production of fissile U-233. The SNF of a cycle with thorium will therefore contain U-233, whose radioactive decay strongly influences the long-term activity curve of the SNF for around a million years. This can be seen by comparing the activity associated with U-233 for three different SNF types: thorium with reactor-grade plutonium (RGPu), thorium with weapons-grade plutonium (WGPu), and mixed oxide fuel (MOX, no thorium). For RGPu and WGPu, an initial amount of U-233 is present and decays over around a million years, keeping the total activity curves of those fuels higher over that period; the initial absence of U-233 and its daughter products in the MOX fuel results in a lower long-term activity.

Nuclear reprocessing can remove the actinides from the spent fuel so they can be used or destroyed (see Long-lived fission product § Actinides). Since uranium and plutonium are nuclear weapons materials, there are proliferation concerns. Ordinarily (in spent nuclear fuel), plutonium is reactor-grade plutonium. In addition to plutonium-239, which is highly suitable for building nuclear weapons, it contains large amounts of undesirable contaminants: plutonium-240, plutonium-241, and plutonium-238. These isotopes are extremely difficult to separate, and more cost-effective ways of obtaining fissile material exist (e.g., uranium enrichment or dedicated plutonium production reactors). [ 21 ]

High-level waste is full of highly radioactive fission products, most of which are relatively short-lived. This is a concern since, if the waste is stored, perhaps in deep geological storage, the fission products decay over many years, decreasing the radioactivity of the waste and making the plutonium easier to access. The undesirable contaminant Pu-240 decays faster than Pu-239, so the quality of the bomb material increases with time (although its quantity decreases during that time as well). Thus, some have argued, as time passes, these deep storage areas have the potential to become "plutonium mines", from which material for nuclear weapons can be acquired with relatively little difficulty. Critics of this idea have pointed out that the difficulty of recovering useful material from sealed deep storage areas makes other methods preferable. Specifically, high radioactivity and heat (80 °C in the surrounding rock) greatly increase the difficulty of mining a storage area, and the enrichment methods required have high capital costs. [ 22 ]

Pu-239 decays to U-235, which is suitable for weapons and has a very long half-life (roughly 10⁹ years). Thus plutonium may decay and leave uranium-235. However, modern reactors are only moderately enriched with U-235 relative to U-238, so the U-238 continues to serve as a denaturation agent for any U-235 produced by plutonium decay. One solution to this problem is to recycle the plutonium and use it as a fuel, e.g. in fast reactors. In pyrometallurgical fast reactors, the separated plutonium and uranium are contaminated by actinides and cannot be used for nuclear weapons.

Waste from nuclear weapons decommissioning is unlikely to contain much beta or gamma activity other than tritium and americium.
It is more likely to contain alpha-emitting actinides such as Pu-239, a fissile material used in nuclear bombs, plus some material with much higher specific activity, such as Pu-238 or polonium. In the past, the neutron trigger for an atomic bomb tended to be beryllium together with a high-activity alpha emitter such as polonium; an alternative to polonium is Pu-238. For reasons of national security, details of the design of modern nuclear bombs are normally not released to the open literature. Some designs might contain a radioisotope thermoelectric generator using Pu-238 to provide a long-lasting source of electrical power for the electronics in the device.

It is likely that the fissile material of an old nuclear bomb which is due for refitting will contain decay products of the plutonium isotopes used in it. These are likely to include U-236 from Pu-240 impurities, plus some U-235 from decay of the Pu-239; due to the relatively long half-lives of these Pu isotopes, these wastes from radioactive decay of bomb core material would be very small, and in any case far less dangerous (even in terms of simple radioactivity) than the Pu-239 itself.

The beta decay of Pu-241 forms Am-241; the in-growth of americium is likely to be a greater problem than the decay of Pu-239 and Pu-240, as americium is a gamma emitter (increasing external exposure to workers) and an alpha emitter which can cause the generation of heat. The plutonium could be separated from the americium by several different processes, including pyrochemical processes and aqueous/organic solvent extraction. A truncated PUREX-type extraction process would be one possible method of making the separation. Naturally occurring uranium is not fissile as a bulk material, because it contains 99.3% U-238 and only 0.7% of the fissile isotope U-235.

Due to historic activities, typically related to the radium industry, uranium mining, and military programs, numerous sites contain or are contaminated with radioactivity. In the United States alone, the Department of Energy (DOE) states there are "millions of gallons of radioactive waste" as well as "thousands of tons of spent nuclear fuel and material" and also "huge quantities of contaminated soil and water." [ 23 ] Despite copious quantities of waste, in 2007 the DOE stated a goal of cleaning all presently contaminated sites successfully by 2025. [ 23 ] The Fernald, Ohio site, for example, had "31 million pounds of uranium product", "2.5 billion pounds of waste", "2.75 million cubic yards of contaminated soil and debris", and a "223 acre portion of the underlying Great Miami Aquifer had uranium levels above drinking standards." [ 23 ] The United States has at least 108 sites designated as areas that are contaminated and unusable, sometimes covering many thousands of acres. [ 23 ] [ 24 ] The DOE wishes to clean or mitigate many or all of them by 2025, using the recently developed method of geomelting; however, the task can be difficult, and it acknowledges that some may never be completely remediated. In just one of these 108 larger designations, Oak Ridge National Laboratory (ORNL), there were, for example, at least "167 known contaminant release sites" in one of the three subdivisions of the 37,000-acre (150 km²) site. [ 23 ] Some of the U.S. sites were smaller in nature, however; their cleanup issues were simpler to address, and the DOE has successfully completed cleanup, or at least closure, of several sites. [ 23 ]

Radioactive medical waste tends to contain beta particle and gamma ray emitters.
It can be divided into two main classes. In diagnostic nuclear medicine a number of short-lived gamma emitters such as technetium-99m are used. Many of these can be disposed of by leaving them to decay for a short time before disposal as normal waste. Other isotopes used in medicine have longer half-lives; examples include iodine-131 (about 8 days) and cobalt-60 (about 5.3 years).

Industrial source waste can contain alpha, beta, neutron or gamma emitters. Gamma emitters are used in radiography, while neutron-emitting sources are used in a range of applications, such as oil well logging. [ 25 ]

Substances containing natural radioactivity are known as NORM (naturally occurring radioactive material). After human processing that exposes or concentrates this natural radioactivity (such as mining bringing coal to the surface, or burning it to produce concentrated ash), it becomes technologically enhanced naturally occurring radioactive material (TENORM). [ 27 ] Much of this waste is alpha-particle-emitting matter from the decay chains of uranium and thorium. The main source of radiation in the human body is potassium-40 (⁴⁰K), typically 17 milligrams in the body at a time, with an intake of 0.4 milligrams per day. [ 28 ] Most rocks, especially granite, have a low level of radioactivity due to the potassium-40, thorium and uranium they contain.

Usually ranging from 1 millisievert (mSv) to 13 mSv annually depending on location, average radiation exposure from natural radioisotopes is 2.0 mSv per person a year worldwide. [ 29 ] This makes up the majority of typical total dosage (with mean annual exposure from other sources amounting to 0.6 mSv from medical tests averaged over the whole populace, 0.4 mSv from cosmic rays, 0.005 mSv from the legacy of past atmospheric nuclear testing, 0.005 mSv occupational exposure, 0.002 mSv from the Chernobyl disaster, and 0.0002 mSv from the nuclear fuel cycle). [ 29 ] TENORM is not regulated as restrictively as nuclear reactor waste, though there are no significant differences in the radiological risks of these materials. [ 30 ]

Coal contains a small amount of radioactive uranium, barium, thorium, and potassium, but, in the case of pure coal, this is significantly less than the average concentration of those elements in the Earth's crust. The surrounding strata, if shale or mudstone, often contain slightly more than average, and this may also be reflected in the ash content of 'dirty' coals. [ 26 ] [ 31 ] The more active ash minerals become concentrated in the fly ash precisely because they do not burn well. [ 26 ] The radioactivity of fly ash is about the same as that of black shale, and is less than that of phosphate rocks, but it is more of a concern because a small amount of the fly ash ends up in the atmosphere, where it can be inhaled. [ 32 ] According to U.S. National Council on Radiation Protection and Measurements (NCRP) reports, population exposure from 1000-MWe power plants amounts to 490 person-rem/year for coal power plants, 100 times as great as for nuclear power plants (4.8 person-rem/year). The exposure from the complete nuclear fuel cycle, from mining to waste disposal, is 136 person-rem/year; the corresponding value for coal use, from mining to waste disposal, is "probably unknown". [ 26 ]

Residues from the oil and gas industry often contain radium and its decay products. The sulfate scale from an oil well can be radium-rich, while the water, oil, and gas from a well often contain radon. The radon decays to form solid radioisotopes which form coatings on the inside of pipework.
In an oil processing plant, the area where propane is processed is often one of the more contaminated areas of the plant, as radon has a boiling point similar to that of propane. [ 33 ] Radioactive elements are an industrial problem in some oil wells, where workers operating in direct contact with the crude oil and brine can be exposed to doses having negative health effects. Due to the relatively high concentration of these elements in the brine, its disposal is also a technological challenge. In the United States, however, the brine has been exempt from hazardous waste regulations since the 1980s and can be disposed of regardless of its content of radioactive or toxic substances. [ 34 ]

Due to the natural occurrence of radioactive elements such as thorium and radium in rare-earth ore, mining operations also result in the production of waste and mineral deposits that are slightly radioactive. [ 35 ]

Classification of radioactive waste varies by country. The IAEA, which publishes the Radioactive Waste Safety Standards (RADWASS), also plays a significant role. [ 36 ] The proportions of the various types of waste generated in the UK are noted with each category below. [ 37 ]

Uranium tailings are waste by-product materials left over from the rough processing of uranium-bearing ore. They are not significantly radioactive. Mill tailings are sometimes referred to as 11(e)2 wastes, from the section of the US Atomic Energy Act of 1954 that defines them. Uranium mill tailings typically also contain chemically hazardous heavy metals such as lead and arsenic. Vast mounds of uranium mill tailings are left at many old mining sites, especially in Colorado, New Mexico, and Utah. Although mill tailings are not very radioactive, the radionuclides they contain have long half-lives. Mill tailings often contain radium, thorium and trace amounts of uranium. [ 38 ]

Low-level waste (LLW) is generated by hospitals and industry, as well as the nuclear fuel cycle. Low-level wastes include paper, rags, tools, clothing, filters, and other materials which contain small amounts of mostly short-lived radioactivity. Materials that originate from any region of an active area are commonly designated as LLW as a precautionary measure, even if there is only a remote possibility of their being contaminated with radioactive materials. Such LLW typically exhibits no higher radioactivity than one would expect from the same material disposed of in a non-active area, such as a normal office block. Example LLW includes wiping rags, mops, medical tubes, laboratory animal carcasses, and more. [ 39 ] LLW makes up 94% of all radioactive waste volume in the UK. Most of it is disposed of in Cumbria, first in landfill-style trenches and now using grouted metal containers that are stacked in concrete vaults. A new site at Dounreay in the north of Scotland is prepared to withstand a 4 m tsunami. [ 1 ] Some high-activity LLW requires shielding during handling and transport, but most LLW is suitable for shallow land burial. To reduce its volume, it is often compacted or incinerated before disposal. Low-level waste is divided into four classes: class A, class B, class C, and Greater Than Class C (GTCC).

Intermediate-level waste (ILW) contains higher amounts of radioactivity than low-level waste. It generally requires shielding, but not cooling. [ 40 ] Intermediate-level wastes include resins, chemical sludge and metal nuclear fuel cladding, as well as contaminated materials from reactor decommissioning.
It may be solidified in concrete or bitumen, or mixed with silica sand and vitrified, for disposal. As a general rule, short-lived waste (mainly non-fuel materials from reactors) is buried in shallow repositories, while long-lived waste (from fuel and fuel reprocessing) is deposited in a deep geological repository. Regulations in the United States do not define this category of waste; the term is used in Europe and elsewhere. ILW makes up 6% of all radioactive waste volume in the UK. [ 1 ]

High-level waste (HLW) is produced by nuclear reactors and the reprocessing of nuclear fuel. [ 41 ] The exact definition of HLW differs internationally. After a nuclear fuel rod serves one fuel cycle and is removed from the core, it is considered HLW. [ 42 ] Spent fuel rods contain mostly uranium, with fission products and transuranic elements generated in the reactor core. Spent fuel is highly radioactive and often hot. HLW accounts for over 95% of the total radioactivity produced in the process of nuclear electricity generation, but it contributes less than 1% of the volume of all radioactive waste produced in the UK. Overall, the 60-year-long nuclear program in the UK up until 2019 produced 2150 m³ of HLW. [ 1 ]

The radioactive waste from spent fuel rods consists primarily of cesium-137 and strontium-90, but it may also include plutonium, which can be considered transuranic waste. [ 38 ] The half-lives of these radioactive elements can differ greatly. Some elements, such as cesium-137 and strontium-90, have half-lives of approximately 30 years, while plutonium has a half-life that can stretch to as long as 24,000 years. [ 38 ] The amount of HLW worldwide is increasing by about 12,000 tonnes per year. [ 43 ] A 1000-megawatt nuclear power plant produces about 27 tonnes of spent nuclear fuel (unreprocessed) every year. [ 44 ] For comparison, the amount of ash produced by coal power plants in the United States is estimated at 130,000,000 t per year, [ 45 ] and fly ash is estimated to release 100 times more radiation than an equivalent nuclear power plant. [ 46 ]

In 2010, it was estimated that about 250,000 t of nuclear HLW were stored globally. [ 47 ] This does not include amounts that have escaped into the environment from accidents or tests. Japan was estimated to hold 17,000 t of HLW in storage in 2015. [ 48 ] As of 2019, the United States has over 90,000 t of HLW. [ 49 ] HLW has been shipped to other countries to be stored or reprocessed and, in some cases, shipped back as active fuel.

The ongoing controversy over high-level radioactive waste disposal is a major constraint on the global expansion of nuclear power. [ 50 ] Most scientists agree that the main proposed long-term solution is deep geological burial, either in a mine or a deep borehole. [ 51 ] [ 52 ] As of 2019, no dedicated civilian high-level nuclear waste site is operational, [ 50 ] as small amounts of HLW did not justify the investment in the past. Finland is in the advanced stage of construction of the Onkalo spent nuclear fuel repository, which is planned to open in 2025 at 400–450 m depth. France is in the planning phase for a 500 m deep Cigeo facility at Bure. Sweden is planning a site in Forsmark. Canada plans a 680 m deep facility near Lake Huron in Ontario. The Republic of Korea plans to open a site around 2028. [ 1 ] The site in Sweden enjoys 80% support from local residents as of 2020. [ 53 ]
The Morris Operation in Grundy County, Illinois, is currently the only de facto high-level radioactive waste storage site in the United States.

Transuranic waste (TRUW), as defined by U.S. regulations, is, without regard to form or origin, waste that is contaminated with alpha-emitting transuranic radionuclides with half-lives greater than 20 years and concentrations greater than 100 nCi/g (3.7 MBq/kg), excluding high-level waste. Elements that have an atomic number greater than that of uranium are called transuranic ("beyond uranium"). Because of their long half-lives, TRUW is disposed of more cautiously than either low- or intermediate-level waste. In the United States, it arises mainly from nuclear weapons production and consists of clothing, tools, rags, residues, debris, and other items contaminated with small amounts of radioactive elements (mainly plutonium).

Under U.S. law, transuranic waste is further categorized into "contact-handled" (CH) and "remote-handled" (RH) on the basis of the radiation dose rate measured at the surface of the waste container. CH TRUW has a surface dose rate not greater than 200 mrem per hour (2 mSv/h), whereas RH TRUW has a surface dose rate of 200 mrem/h (2 mSv/h) or greater. CH TRUW has neither the very high radioactivity of high-level waste nor its high heat generation, but RH TRUW can be highly radioactive, with surface dose rates up to 1,000,000 mrem/h (10,000 mSv/h). The United States currently disposes of TRUW generated from military facilities at the Waste Isolation Pilot Plant (WIPP), in a deep salt formation in New Mexico. [ 54 ]

A future way to reduce waste accumulation is to phase out current reactors in favor of Generation IV reactors, which output less waste per unit of power generated. Fast reactors such as the BN-800 in Russia are also able to consume MOX fuel manufactured from recycled spent fuel from traditional reactors. [ 55 ] The UK's Nuclear Decommissioning Authority (NDA) published a position paper in 2014 on progress in approaches to the management of separated plutonium, which summarises the conclusions of the work that the NDA shared with the UK government. [ 56 ]

Of particular concern in nuclear waste management are two long-lived fission products, Tc-99 (half-life 220,000 years) and I-129 (half-life 15.7 million years), which dominate spent fuel radioactivity after a few thousand years. The most troublesome transuranic elements in spent fuel are Np-237 (half-life two million years) and Pu-239 (half-life 24,000 years). [ 57 ]

Nuclear waste requires sophisticated treatment and management to successfully isolate it from interacting with the biosphere. This usually necessitates treatment, followed by a long-term management strategy involving storage, disposal or transformation of the waste into a non-toxic form. [ 58 ] Governments around the world are considering a range of waste management and disposal options, though there has been limited progress toward long-term waste management solutions. [ 59 ] Several methods of disposal of radioactive waste have been investigated. [ 62 ]

In the United States, waste management policy broke down with the ending of work on the incomplete Yucca Mountain Repository. [ 64 ] At present there are 70 nuclear power plant sites where spent fuel is stored. A Blue Ribbon Commission was appointed by U.S. President Obama to look into future options for this and future waste. A deep geological repository seems to be favored. [ 64 ]
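The TRUW screening rules described earlier in this section reduce to two numeric tests. The sketch below is illustrative only; it uses the thresholds quoted above (100 nCi/g of long-lived alpha-emitting transuranics, and 200 mrem/h at the container surface for the CH/RH split) and ignores the other criteria a real waste determination would involve.

```python
def classify_truw(tru_conc_nci_per_g: float, surface_dose_mrem_h: float) -> str:
    """Classify waste per the U.S. TRUW thresholds quoted in this section.

    Assumes the alpha-emitting transuranic radionuclides present have
    half-lives greater than 20 years, as the definition requires.
    """
    if tru_conc_nci_per_g <= 100:
        return "not TRUW (may still be another waste class)"
    return "RH-TRUW" if surface_dose_mrem_h >= 200 else "CH-TRUW"

print(classify_truw(250, 50))    # CH-TRUW
print(classify_truw(250, 5000))  # RH-TRUW
print(classify_truw(40, 5000))   # not TRUW (may still be another waste class)
```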
A future way to reduce waste accumulation is to phase out current reactors in favor of Generation IV reactors, which output less waste per unit of power generated. Fast reactors such as the BN-800 in Russia are also able to consume MOX fuel that is manufactured from recycled spent fuel from traditional reactors. [ 55 ] The UK's Nuclear Decommissioning Authority published a position paper in 2014 on progress on approaches to the management of separated plutonium, which summarises the conclusions of the work that the NDA shared with the UK government. [ 56 ] Of particular concern in nuclear waste management are two long-lived fission products, Tc-99 (half-life 220,000 years) and I-129 (half-life 15.7 million years), which dominate spent fuel radioactivity after a few thousand years. The most troublesome transuranic elements in spent fuel are Np-237 (half-life two million years) and Pu-239 (half-life 24,000 years). [ 57 ] Nuclear waste requires sophisticated treatment and management to successfully isolate it from the biosphere. This usually necessitates treatment, followed by a long-term management strategy involving storage, disposal, or transformation of the waste into a non-toxic form. [ 58 ] Governments around the world are considering a range of waste management and disposal options, though there has been limited progress toward long-term waste management solutions. [ 59 ] Several methods of disposal of radioactive waste have been investigated. [ 62 ] In the United States, waste management policy broke down with the ending of work on the incomplete Yucca Mountain Repository. [ 64 ] At present there are 70 nuclear power plant sites where spent fuel is stored. A Blue Ribbon Commission was appointed by U.S. President Obama to look into options for managing this and future waste. A deep geological repository seems to be favored. [ 64 ] Ducrete, Saltcrete, and Synroc are methods for immobilizing nuclear waste. Maritime transport of radioactive waste on ships is regulated at sea by the INF Code. [ 65 ] Long-term storage of radioactive waste requires the stabilization of the waste into a form that will neither react nor degrade for extended periods. It is theorized that one way to do this might be through vitrification. [ 66 ] Currently at Sellafield, the high-level waste (PUREX first cycle raffinate) is mixed with sugar and then calcined. Calcination involves passing the waste through a heated, rotating tube. The purposes of calcination are to evaporate the water from the waste and de-nitrate the fission products to assist the stability of the glass produced. [ 67 ] The 'calcine' generated is fed continuously into an induction-heated furnace with fragmented glass. [ 68 ] The resulting glass is a new substance in which the waste products are bonded into the glass matrix when it solidifies. As a melt, this product is poured into stainless steel cylindrical containers ("cylinders") in a batch process. When cooled, the fluid solidifies ("vitrifies") into glass. After being formed, the glass is highly resistant to water. [ 69 ] After filling a cylinder, a seal is welded onto the cylinder head. The cylinder is then washed. After being inspected for external contamination, the steel cylinder is stored, usually in an underground repository. In this form, the waste products are expected to be immobilized for thousands of years. [ 70 ] The glass inside a cylinder is usually a black glossy substance. All this work (in the United Kingdom) is done using hot cell systems. Sugar is added to control the ruthenium chemistry and to stop the formation of the volatile RuO₄, which contains radioactive ruthenium isotopes. In the West, the glass is normally a borosilicate glass (similar to Pyrex), while in the former Soviet Union it is normal to use a phosphate glass. [ 71 ] The amount of fission products in the glass must be limited because some (palladium, the other Pt-group metals, and tellurium) tend to form metallic phases which separate from the glass. Bulk vitrification uses electrodes to melt soil and wastes, which are then buried underground. [ 72 ] In Germany, a vitrification plant is treating the waste from a small demonstration reprocessing plant which has since been closed. [ 67 ] [ 73 ] Vitrification is not the only way to stabilize waste into a form that will not react or degrade for extended periods. Immobilization via direct incorporation into a phosphate-based crystalline ceramic host is also used. [ 74 ] The diverse chemistry of phosphate ceramics under various conditions yields a versatile material that can withstand chemical, thermal, and radioactive degradation over time. The stability of phosphates, particularly ceramic phosphates, over a wide pH range, together with their low porosity and minimization of secondary waste, opens possibilities for new waste immobilization techniques. It is common for medium-active wastes in the nuclear industry to be treated with ion exchange or other means to concentrate the radioactivity into a small volume. The much less radioactive bulk (after treatment) is often then discharged. For instance, it is possible to use a ferric hydroxide floc to remove radioactive metals from aqueous mixtures.
[ 75 ] After the radioisotopes are absorbed onto the ferric hydroxide, the resulting sludge can be placed in a metal drum before being mixed with cement to form solid waste. [ 76 ] In order to get better long-term performance (mechanical stability) from such forms, they may be made from a mixture of fly ash, or blast furnace slag, and portland cement, instead of normal concrete (made with portland cement, gravel, and sand). The Australian Synroc (synthetic rock) is a more sophisticated way to immobilize such waste, and this process may eventually come into commercial use for civil wastes (it is currently being developed for U.S. military wastes). Synroc was invented by Ted Ringwood, a geochemist at the Australian National University. [ 77 ] Synroc contains pyrochlore- and cryptomelane-type minerals. The original form of Synroc (Synroc C) was designed for the liquid high-level waste (PUREX raffinate) from a light-water reactor. The main minerals in this Synroc are hollandite (BaAl₂Ti₆O₁₆), zirconolite (CaZrTi₂O₇), and perovskite (CaTiO₃). The zirconolite and perovskite are hosts for the actinides. The strontium and barium are fixed in the perovskite, and the caesium in the hollandite. A Synroc waste treatment facility began construction in 2018 at ANSTO. [ 78 ] The time frame in question when dealing with radioactive waste ranges from 10,000 to 1,000,000 years, [ 79 ] according to studies based on the effect of estimated radiation doses. [ 80 ] Researchers suggest that forecasts of health detriment for such periods should be examined critically. [ 81 ] [ 82 ] Practical studies only consider up to 100 years as far as effective planning [ 83 ] and cost evaluations [ 84 ] are concerned. The long-term behavior of radioactive wastes remains a subject for ongoing research projects in geoforecasting. [ 85 ] Algae have shown selectivity for strontium in studies, whereas most plants used in bioremediation have not shown selectivity between calcium and strontium, often becoming saturated with calcium, which is present in greater quantities in nuclear waste. Strontium-90, with a half-life of around 30 years, is classified as high-level waste. [ 86 ] Researchers have looked at the bioaccumulation of strontium by Scenedesmus spinosus (algae) in simulated wastewater. The study claims a highly selective biosorption capacity for strontium by S. spinosus, suggesting that it may be appropriate for treating nuclear wastewater. [ 87 ] A study of the pond alga Closterium moniliferum using non-radioactive strontium found that varying the ratio of barium to strontium in water improved strontium selectivity. [ 86 ] Dry cask storage typically involves taking waste from a spent fuel pool and sealing it (along with an inert gas) in a steel cylinder, which is placed inside a concrete cylinder that acts as a radiation shield. It is a relatively inexpensive method which can be done at a central facility or adjacent to the source reactor. The waste can be easily retrieved for reprocessing. [ 88 ] The process of selecting appropriate deep final repositories for high-level waste and spent fuel is now underway in several countries, with the first expected to be commissioned sometime after 2010.
[ citation needed ] The basic concept is to locate a large, stable geologic formation and use mining technology to excavate a tunnel, or to use large-bore tunnel boring machines (similar to those used to drill the Channel Tunnel from England to France) to drill a shaft 500 to 1,000 metres (1,600 to 3,300 ft) below the surface, where rooms or vaults can be excavated for disposal of high-level radioactive waste. The goal is to permanently isolate nuclear waste from the human environment. Many people remain uncomfortable with ceasing stewardship of such a disposal system immediately after closure, suggesting perpetual management and monitoring would be more prudent. [ citation needed ] Because some radioactive species have half-lives longer than one million years, even very low container leakage and radionuclide migration rates must be taken into account. [ 90 ] Moreover, it may require more than one half-life until some nuclear materials lose enough radioactivity to cease being lethal to living things. A 1983 review of the Swedish radioactive waste disposal program by the National Academy of Sciences found that country's estimate that several hundred thousand years, perhaps up to one million years, of waste isolation would be necessary "fully justified." [ 91 ] The proposed land-based subductive waste disposal method disposes of nuclear waste in a subduction zone accessed from land and therefore is not prohibited by international agreement. This method has been described as the most viable means of disposing of radioactive waste, [ 92 ] and as the state of the art in nuclear waste disposal technology as of 2001. [ 93 ] Another approach, termed Remix & Return, [ 94 ] would blend high-level waste with uranium mine and mill tailings down to the level of the original radioactivity of the uranium ore, then replace it in inactive uranium mines. This approach has the merits of providing jobs for miners who would double as disposal staff, and of facilitating a cradle-to-grave cycle for radioactive materials, but would be inappropriate for spent reactor fuel in the absence of reprocessing, due to the presence of highly toxic radioactive elements such as plutonium within it. Deep borehole disposal is the concept of disposing of high-level radioactive waste from nuclear reactors in extremely deep boreholes. Deep borehole disposal seeks to place the waste as much as 5 kilometres (3.1 mi) beneath the surface of the Earth and relies primarily on the immense natural geological barrier to confine the waste safely and permanently, so that it should never pose a threat to the environment. The Earth's crust contains 120 trillion tons of thorium and 40 trillion tons of uranium (primarily at trace concentrations of parts per million each, adding up over the crust's 3 × 10¹⁹ ton mass), among other natural radioisotopes. [ 95 ] [ 96 ] [ 97 ] Since the fraction of nuclides decaying per unit of time is inversely proportional to an isotope's half-life, the relative radioactivity of the much smaller amount of human-produced radioisotopes (thousands of tons instead of trillions of tons) would diminish once the isotopes with far shorter half-lives than the bulk of natural radioisotopes decayed.
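That inverse relation is just the decay law A = λN with λ = ln 2 / T½: for a fixed number of atoms, activity scales as 1/T½. A rough sketch of the comparison (Python; round-number half-lives, one mole of atoms in each case, all values illustrative):

import math

AVOGADRO = 6.022e23
SECONDS_PER_YEAR = 3.156e7

def activity_bq(n_atoms: float, half_life_years: float) -> float:
    """A = lambda * N, where lambda = ln(2) / half-life (in seconds)."""
    return math.log(2) / (half_life_years * SECONDS_PER_YEAR) * n_atoms

# One mole each of a long-lived natural isotope and a short-lived fission product:
print(f"U-238 (4.5e9 y half-life): {activity_bq(AVOGADRO, 4.5e9):.2e} Bq/mol")
print(f"Cs-137   (30 y half-life): {activity_bq(AVOGADRO, 30.0):.2e} Bq/mol")
# Atom for atom, the short-lived isotope is ~1e8 times more active,
# but that activity also dies away on the same short timescale.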
"For any host community, there will be a substantial community benefits package and worth hundreds of millions of pounds" said Ed Davey, Energy Secretary, but nonetheless, the local elected body voted 7–3 against research continuing, after hearing evidence from independent geologists that "the fractured strata of the county was impossible to entrust with such dangerous material and a hazard lasting millennia." [ 98 ] [ 99 ] Horizontal drillhole disposal describes proposals to drill over one km vertically, and two km horizontally in the earth's crust, for the purpose of disposing of high-level waste forms such as spent nuclear fuel, Caesium-137, or Strontium-90. After the emplacement and the retrievability period, [ clarification needed ] drillholes would be backfilled and sealed. A series of tests of the technology were carried out in November 2018 and then again publicly in January 2019 by a U.S. based private company. [ 100 ] The test demonstrated the emplacement of a test-canister in a horizontal drillhole and retrieval of the same canister. There was no actual high-level waste used in the test. [ 101 ] [ 102 ] The European Commission Joint Research Centre report of 2021 (see above) concluded: [ 103 ] Management of radioactive waste and its safe and secure disposal is a necessary step in the lifecycle of all applications of nuclear science and technology (nuclear energy, research, industry, education, medical, and others). Radioactive waste is therefore generated in practically every country, the largest contribution coming from the nuclear energy lifecycle in countries operating nuclear power plants. Presently, there is broad scientific and technical consensus that disposal of high-level, long-lived radioactive waste in deep geologic formations is, at the state of today’s knowledge, considered as an appropriate and safe means of isolating it from the biosphere for very long time scales. From 1946 through 1993, thirteen countries used ocean disposal or ocean dumping as a method to dispose of nuclear/radioactive waste with an approximation of 200,000 tons sourcing mainly from the medical, research and nuclear industry. [ 104 ] Ocean floor disposal of radioactive waste has been suggested by the finding that deep waters in the North Atlantic Ocean do not present an exchange with shallow waters for about 140 years based on oxygen content data recorded over a period of 25 years. [ 105 ] They include burial beneath a stable abyssal plain , burial in a subduction zone that would slowly carry the waste downward into the Earth's mantle , [ 106 ] [ 107 ] and burial beneath a remote natural or human-made island. While these approaches all have merit and would facilitate an international solution to the problem of disposal of radioactive waste, they would require an amendment of the Law of the Sea . [ 108 ] Nuclear submarines have been lost and these vessels reactors must also be counted in the amount of radioactive waste deposited at sea. Article 1 (Definitions), 7., of the 1996 Protocol to the Convention on the Prevention of Marine Pollution by Dumping of Wastes and Other Matter, (the London Dumping Convention) states: There have been proposals for reactors that consume nuclear waste and transmute it to other, less-harmful or shorter-lived, nuclear waste. In particular, the integral fast reactor was a proposed nuclear reactor with a nuclear fuel cycle that produced no transuranic waste and, in fact, could consume transuranic waste. 
It proceeded as far as large-scale tests but was eventually canceled by the U.S. Government. Another approach, considered safer but requiring more development, is to dedicate subcritical reactors to the transmutation of the left-over transuranic elements. An isotope that is found in nuclear waste and that represents a proliferation concern is Pu-239. The large stock of plutonium is a result of its production inside uranium-fueled reactors and of the reprocessing of weapons-grade plutonium during the weapons program. An option for getting rid of this plutonium is to use it as a fuel in traditional light-water reactors (LWRs). Several fuel types with differing plutonium destruction efficiencies are under study. Transmutation was banned in the United States in April 1977 by U.S. President Carter due to the danger of plutonium proliferation, [ 109 ] but President Reagan rescinded the ban in 1981. [ 110 ] Due to economic losses and risks, the construction of reprocessing plants during this time did not resume. Due to high energy demand, work on the method has continued in the European Union (EU). This has resulted in a practical nuclear research reactor called Myrrha in which transmutation is possible. Additionally, a new research program called ACTINET has been started in the EU to make transmutation possible on an industrial scale. According to U.S. President Bush's Global Nuclear Energy Partnership (GNEP) of 2007, the United States is actively promoting research on transmutation technologies needed to markedly reduce the problem of nuclear waste treatment. [ 111 ] There have also been theoretical studies involving the use of fusion reactors as so-called "actinide burners", where a fusion reactor plasma, such as in a tokamak, could be "doped" with a small amount of the "minor" transuranic atoms, which would be transmuted (meaning fissioned, in the actinide case) to lighter elements upon their successive bombardment by the very high-energy neutrons produced by the fusion of deuterium and tritium in the reactor. A study at MIT found that only 2 or 3 fusion reactors with parameters similar to those of the International Thermonuclear Experimental Reactor (ITER) could transmute the entire annual minor actinide production from all of the light-water reactors presently operating in the United States fleet, while simultaneously generating approximately 1 gigawatt of power from each reactor. [ 112 ] Gérard Mourou, winner of the 2018 Nobel Prize in Physics, has proposed using chirped pulse amplification to generate high-energy, short-duration laser pulses, either to accelerate deuterons into a tritium target, causing fusion events that yield fast neutrons, or to accelerate protons for neutron spallation, with either method intended for transmutation of nuclear waste. [ 113 ] [ 114 ] [ 115 ] Spent nuclear fuel contains abundant fertile uranium and traces of fissile materials. [ 20 ] Methods such as the PUREX process can be used to remove useful actinides for the production of active nuclear fuel. Another option is to find applications for the isotopes in nuclear waste so as to re-use them. [ 116 ] Already, caesium-137, strontium-90, and a few other isotopes are extracted for certain industrial applications such as food irradiation and radioisotope thermoelectric generators. While re-use does not eliminate the need to manage radioisotopes, it can reduce the quantity of waste produced.
The Nuclear Assisted Hydrocarbon Production Method, [ 117 ] Canadian patent application 2,659,302, is a method for the temporary or permanent storage of nuclear waste materials comprising the placing of waste materials into one or more repositories or boreholes constructed into an unconventional oil formation. The thermal flux of the waste materials fractures the formation and alters the chemical and/or physical properties of hydrocarbon material within the subterranean formation to allow removal of the altered material. A mixture of hydrocarbons, hydrogen, and/or other formation fluids is produced from the formation. The radioactivity of high-level radioactive waste affords proliferation resistance to plutonium placed in the periphery of the repository or the deepest portion of a borehole. Breeder reactors can run on U-238 and transuranic elements, which comprise the majority of spent fuel radioactivity in the 1,000–100,000-year time span. Space disposal is attractive because it removes nuclear waste from the planet. It has significant disadvantages, such as the potential for catastrophic failure of a launch vehicle, which could spread radioactive material into the atmosphere and around the world. A high number of launches would be required because no individual rocket would be able to carry very much of the material relative to the total amount that needs to be disposed of. This makes the proposal economically impractical and increases the risk of one or more launch failures. [ 118 ] To further complicate matters, international agreements on the regulation of such a program would need to be established. [ 119 ] The costs and inadequate reliability of modern rocket launch systems for space disposal have been among the motives for interest in non-rocket spacelaunch systems such as mass drivers, space elevators, and other proposals. [ 120 ] Sweden and Finland are furthest along in committing to a particular disposal technology, while many others reprocess spent fuel or contract with France or Great Britain to do it, taking back the resulting plutonium and high-level waste. "An increasing backlog of plutonium from reprocessing is developing in many countries... It is doubtful that reprocessing makes economic sense in the present environment of cheap uranium." [ 121 ] In many European countries (e.g., Britain, Finland, the Netherlands, Sweden, and Switzerland) the risk or dose limit for a member of the public exposed to radiation from a future high-level nuclear waste facility is considerably more stringent than that suggested by the International Commission on Radiological Protection or proposed in the United States. European limits are often more stringent than the standard suggested in 1990 by the International Commission on Radiological Protection by a factor of 20, and more stringent by a factor of ten than the standard proposed by the U.S. Environmental Protection Agency (EPA) for the Yucca Mountain nuclear waste repository for the first 10,000 years after closure. [ 122 ] The U.S. EPA's proposed standard for greater than 10,000 years is 250 times more permissive than the European limit. [ 122 ] The U.S. EPA proposed a legal limit of a maximum of 3.5 millisieverts (350 millirem) annually to local individuals after 10,000 years, which would be up to several percent of [ vague ] the exposure currently received by some populations in the highest natural background regions on Earth, though the United States Department of Energy (DOE) predicted that the received dose would be much below that limit.
[ 123 ] Over a timeframe of thousands of years, after the most active short-half-life radioisotopes had decayed, burying U.S. nuclear waste would increase the radioactivity in the top 2,000 feet of rock and soil in the United States (10 million km²) by approximately 1 part in 10 million over the cumulative amount of natural radioisotopes in such a volume, although the vicinity of the site would have a far higher concentration of artificial radioisotopes underground than that average. [ 124 ] After serious opposition to plans for and negotiations between Mongolia, Japan, and the United States to build nuclear-waste facilities in Mongolia, Mongolia stopped all negotiations in September 2011. These negotiations had started after U.S. Deputy Secretary of Energy Daniel Poneman visited Mongolia in September 2010. Talks took place in Washington, D.C. between officials of Japan, the United States, and Mongolia in February 2011. After this, the United Arab Emirates (UAE), which wanted to buy nuclear fuel from Mongolia, joined in the negotiations. The talks were kept secret and, although the Mainichi Daily News reported on them in May, Mongolia officially denied the existence of these negotiations. Alarmed by this news, Mongolian citizens protested against the plans and demanded the government withdraw them and disclose information. The Mongolian President Tsakhiagiin Elbegdorj issued a presidential order on September 13 banning all negotiations with foreign governments or international organizations on nuclear-waste storage plans in Mongolia. [ 125 ] The Mongolian government accused the newspaper of distributing false claims around the world. After the presidential order, the Mongolian president fired the individual who was supposedly involved in these conversations. Authorities in Italy are investigating a 'Ndrangheta mafia clan accused of trafficking and illegally dumping nuclear waste. According to a whistleblower, a manager of Italy's state energy research agency Enea paid the clan to get rid of 600 drums of toxic and radioactive waste from Italy, Switzerland, France, Germany, and the United States, with Somalia as the destination, where the waste was buried after buying off local politicians. Former employees of Enea are suspected of paying the criminals to take waste off their hands in the 1980s and 1990s. Shipments to Somalia continued into the 1990s, while the 'Ndrangheta clan also blew up shiploads of waste, including radioactive hospital waste, sending them to the sea bed off the Calabrian coast. [ 126 ] According to the environmental group Legambiente, former members of the 'Ndrangheta have said that they were paid to sink ships with radioactive material over the previous 20 years. [ 127 ] In 2008, Afghan authorities accused Pakistan of illegally dumping nuclear waste in the southern parts of Afghanistan when the Taliban were in power between 1996 and 2001. [ 128 ] The Pakistani government denied the allegation. A few incidents have occurred in which radioactive material was disposed of improperly, shielding during transport was defective, or the material was simply abandoned or even stolen from a waste store. [ 129 ] In the Soviet Union, waste stored in Lake Karachay was blown over the area during a dust storm after the lake had partly dried out. [ 130 ] In Italy, several radioactive waste deposits let material flow into river water, thus contaminating water for domestic use.
[ 131 ] In France, in the summer of 2008, numerous incidents occurred. [ 132 ] In one, at the Areva plant in Tricastin, it was reported that, during a draining operation, liquid containing untreated uranium overflowed out of a faulty tank, and about 75 kg of the radioactive material seeped into the ground and, from there, into two rivers nearby; [ 133 ] in another case, over 100 staff were contaminated with low doses of radiation. [ 134 ] There are ongoing concerns about the deterioration of the nuclear waste site on the Enewetak Atoll of the Marshall Islands and a potential radioactive spill. [ 135 ] Scavenging of abandoned radioactive material has been the cause of several other cases of radiation exposure, mostly in developing nations, which may have less regulation of dangerous substances (and sometimes less general education about radioactivity and its hazards) and a market for scavenged goods and scrap metal. The scavengers and those who buy the material are almost always unaware that the material is radioactive; it is selected for its aesthetics or scrap value. [ 136 ] Irresponsibility on the part of the radioactive material's owners, usually a hospital, university, or military, and the absence of regulation concerning radioactive waste, or a lack of enforcement of such regulations, have been significant factors in radiation exposures. For an example of an accident involving radioactive scrap originating from a hospital, see the Goiânia accident. [ 136 ] Transportation accidents involving spent nuclear fuel from power plants are unlikely to have serious consequences due to the strength of the spent nuclear fuel shipping casks. [ 137 ] On 15 December 2011, top Japanese government spokesman Osamu Fujimura admitted that nuclear substances were found in the waste of Japanese nuclear facilities. Although Japan had committed itself in 1977 to inspections under its safeguards agreement with the IAEA, the reports were kept secret from the inspectors of the International Atomic Energy Agency. [ citation needed ] Japan did start discussions with the IAEA about the large quantities of enriched uranium and plutonium that were discovered in nuclear waste cleared away by Japanese nuclear operators. [ citation needed ] At the press conference, Fujimura said: "Based on investigations so far, most nuclear substances have been properly managed as waste, and from that perspective, there is no problem in safety management," but according to him, the matter was at that moment still being investigated. [ 138 ]
https://en.wikipedia.org/wiki/Radioactive_waste
Radioactivity or radionuclide fixatives are specialized polymer coatings used to "fix" radioactive isotopes or radioactive material to surfaces. These fixatives, also known as permanent coatings in the radioactive contamination control field, have been used for many decades in facilities processing radioactive material to control radioactive contamination. There has been increased interest in these fixatives or coatings recently due to growing concern about contamination from a radioactivity dispersal device (RDD, also known as a dirty bomb) and because the radioactivity fixatives in use today lose the ability to contain radioactivity to the surface during a fire. Radioactivity fixatives reduce or eliminate the movement of radionuclides from surfaces, thereby lowering the health risk of inhalation or other exposure to radioactive isotopes. There are many articles on the use of radioactive fixatives, with a review article [ 1 ] from 1983 often used as a reference. A more recent review article [ 2 ] examines the use of these fixatives after the detonation of an RDD. Current research is investigating new coatings that are effective at containing radioactive material to the surface during and after fires.
https://en.wikipedia.org/wiki/Radioactivity_Fixatives
Radioactivity is generally used in life sciences for highly sensitive and direct measurements of biological phenomena, and for visualizing the location of biomolecules radiolabelled with a radioisotope. All atoms exist as stable or unstable isotopes, and the latter decay at a given half-life ranging from attoseconds to billions of years; radioisotopes useful to biological and experimental systems have half-lives ranging from minutes to months. The hydrogen isotope tritium (half-life 12.3 years) and carbon-14 (half-life 5,730 years) derive their importance from the fact that all organic life contains hydrogen and carbon, and they can therefore be used to study countless living processes, reactions, and phenomena. Most short-lived isotopes are produced in cyclotrons, linear particle accelerators, or nuclear reactors, and their relatively short half-lives give them high maximum theoretical specific activities, which are useful for detection in biological systems. Radiolabeling is a technique used to track the passage of a molecule that incorporates a radioisotope through a reaction, metabolic pathway, cell, tissue, organism, or biological system. The reactant is 'labeled' by replacing specific atoms with their isotope. Replacing an atom with its own radioisotope is an intrinsic label that does not alter the structure of the molecule. Alternatively, molecules can be radiolabeled by chemical reactions that introduce an atom, moiety, or functional group that contains a radionuclide. For example, radio-iodination of peptides and proteins with biologically useful iodine isotopes is easily done by an oxidation reaction that replaces the hydroxyl group with iodine on tyrosine and histidine residues. Another example is to use chelators such as DOTA that can be chemically coupled to a protein; the chelator in turn traps radiometals, thus radiolabeling the protein. This has been used for introducing yttrium-90 onto a monoclonal antibody for therapeutic purposes and for introducing gallium-68 onto the peptide octreotide for diagnostic PET imaging. [ 1 ] (See DOTA uses.) Radiolabeling is not necessary for some applications. For some purposes, soluble ionic salts can be used directly without further modification (e.g., gallium-67, gallium-68, and radioiodine isotopes). These uses rely on the chemical and biological properties of the radioisotope itself to localize it within the organism or biological system. Molecular imaging is the biomedical field that employs radiotracers to visualize and quantify biological processes using positron emission tomography (PET) and single-photon emission computed tomography (SPECT) imaging. Again, a key feature of using radioactivity in life science applications is that it is a quantitative technique, so PET/SPECT reveals not only where a radiolabelled molecule is but how much is there. Radiobiology (also known as radiation biology) is a field of clinical and basic medical sciences that involves the study of the action of radioactivity on biological systems. The controlled action of deleterious radioactivity on living systems is the basis of radiation therapy. Tritium (hydrogen-3) is a very low-energy beta emitter that can be used to label proteins, nucleic acids, drugs, and almost any organic biomolecule. The maximum theoretical specific activity of tritium is 28.8 kCi/mol (1,070 TBq/mol).
[ 2 ] However, there is often more than one tritium atom per molecule: for example, tritiated UTP is sold by most suppliers with carbons 5 and 6 each bonded to a tritium atom. For tritium detection, liquid scintillation counters have classically been employed, in which the energy of a tritium decay is transferred to a scintillant molecule in solution, which in turn gives off photons whose intensity and spectrum can be measured by a photomultiplier array. The efficiency of this process is 4–50%, depending on the scintillation cocktail used. [ 3 ] [ 4 ] The measurements are typically expressed in counts per minute (CPM) or disintegrations per minute (DPM). Alternatively, a solid-state, tritium-specific phosphor screen can be used together with a phosphorimager to measure and simultaneously image the radiotracer. [ 5 ] Measurements/images are digital in nature and can be expressed in intensity or densitometry units within a region of interest (ROI). Carbon-14 has a long half-life of 5,730 ± 40 years. Its maximum specific activity is 0.0624 kCi/mol (2.31 TBq/mol). It is used in applications such as radiometric dating or drug tests. [ 6 ] Carbon-14 labeling is common in drug development for ADME (absorption, distribution, metabolism, and excretion) studies in animal models and in human toxicology and clinical trials. Tritium exchange may occur in some radiolabeled compounds; this does not happen with carbon-14, which may therefore be preferred. Sodium-22 and chlorine-36 are commonly used to study ion transporters. However, sodium-22 is hard to screen off, and chlorine-36, with a half-life of 300,000 years, has low activity. [ 7 ] Sulfur-35 is used to label proteins and nucleic acids. Cysteine is an amino acid containing a thiol group which can be labeled with sulfur-35. For nucleotides that do not contain a sulfur group, the oxygen on one of the phosphate groups can be substituted with a sulfur. This thiophosphate acts the same as a normal phosphate group, although there is a slight bias against it by most polymerases. The maximum theoretical specific activity is 1,494 kCi/mol (55.3 PBq/mol). Phosphorus-32 is widely used for labeling nucleic acids and phosphoproteins. It has the highest emission energy (1.7 MeV) of all common research radioisotopes. This is a major advantage in experiments for which sensitivity is a primary consideration, such as titrations of very strong interactions (i.e., very low dissociation constant), footprinting experiments, and detection of low-abundance phosphorylated species. Phosphorus-32 is also relatively inexpensive. Because of its high energy, however, its safe use requires a number of engineering controls (e.g., acrylic glass) and administrative controls. The half-life of phosphorus-32 is 14.2 days, and its maximum specific activity is 9,131 kCi/mol (337.8 PBq/mol). Phosphorus-33 is used to label nucleotides. It is less energetic than phosphorus-32 and does not require protection with plexiglass. A disadvantage is its higher cost compared to phosphorus-32, as most of the bombarded phosphorus-31 will have acquired only one neutron, while only some will have acquired two or more. Its maximum specific activity is 5,118 kCi/mol (189.4 PBq/mol). Iodine-125 is commonly used for labeling proteins, usually at tyrosine residues. Unbound iodine is volatile and must be handled in a fume hood. Its maximum specific activity is 2,176 kCi/mol (80.5 PBq/mol).
A good example of the difference in energy of the various radionuclides is the detection window ranges used to detect them, which are generally proportional to the energy of the emission but vary from machine to machine: in a PerkinElmer TriLux beta scintillation counter, the hydrogen-3 energy window spans channels 5–360; carbon-14, sulfur-35, and phosphorus-33 fall in the window of 361–660; and phosphorus-32 falls in the window of 661–1024. [ citation needed ] In liquid scintillation counting, a small aliquot, filter, or swab is added to scintillation fluid and the plate or vial is placed in a scintillation counter to measure the radioactive emissions. Manufacturers have incorporated solid scintillants into multi-well plates to eliminate the need for scintillation fluid and make this into a high-throughput technique. A gamma counter is similar in format to scintillation counting, but it detects gamma emissions directly and does not require a scintillant. A Geiger counter gives a quick and rough approximation of activity; lower-energy emitters such as tritium cannot be detected. Autoradiography: a tissue section affixed to a microscope slide, or a membrane such as a Northern blot or a hybridized slot blot, can be placed against X-ray film or phosphor screens to acquire a photographic or digital image. The density of exposure, if calibrated, can supply exacting quantitative information. Phosphor storage screen: the slide or membrane is placed against a phosphor screen which is then scanned in a phosphorimager. This is many times faster than film/emulsion techniques and outputs data in a digital form, so it has largely replaced film/emulsion techniques. Electron microscopy: the sample is not exposed to a beam of electrons; rather, detectors pick up the electrons expelled by the radionuclides. Micro-autoradiography: a tissue section, typically cryosectioned, is placed against a phosphor screen as above. Quantitative whole-body autoradiography (QWBA): larger in scale than micro-autoradiography; whole animals, typically rodents, can be analyzed for biodistribution studies. Schild regression is a radioligand binding assay. It is used for DNA labelling (5' and 3'), leaving the nucleic acids intact. A vial of radiolabel has a "total activity". Taking as an example γ-32P ATP, from the catalogues of the two major suppliers, PerkinElmer NEG502H500UC or GE AA0068-500UCI, in this case the total activity is 500 μCi (other typical numbers are 250 μCi or 1 mCi). This is contained in a certain volume, depending on the radioactive concentration, such as 5 to 10 mCi/mL (185 to 370 TBq/m³); typical volumes include 50 or 25 μL. Not all molecules in the solution have a P-32 on the last (i.e., gamma) phosphate: the "specific activity" gives the radioactivity concentration and depends on the radionuclide's half-life. If every molecule were labelled, the maximum theoretical specific activity would be obtained, which for P-32 is 9,131 Ci/mmol. Due to pre-calibration and efficiency issues this number is never seen on a label; the values often found are 800, 3,000, and 6,000 Ci/mmol. With this number it is possible to calculate the total chemical concentration and the hot-to-cold ratio. "Calibration date" is the date on which the vial's activity is the same as on the label. "Pre-calibration" is when the activity is calibrated to a future date to compensate for the decay that occurs during shipping.
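The maximum theoretical specific activity quoted here follows directly from the decay law: per mole of labelled molecules, A = λ·N_A with λ = ln 2 / T½. A small sketch reproducing the P-32 figure (Python; constants are rounded, so the result differs slightly from the quoted value):

import math

AVOGADRO = 6.022e23   # molecules per mole
BQ_PER_CI = 3.7e10    # becquerels per curie
SECONDS_PER_DAY = 86_400.0

def max_specific_activity(half_life_days: float) -> float:
    """Ci/mmol if every molecule carries one radioactive atom:
    A = lambda * N_A, with lambda = ln(2) / half-life."""
    lam = math.log(2) / (half_life_days * SECONDS_PER_DAY)  # decay constant, 1/s
    return lam * AVOGADRO / BQ_PER_CI / 1000.0              # Bq/mol -> Ci/mmol

print(f"P-32 (14.2 d): {max_specific_activity(14.2):,.0f} Ci/mmol")
# ~9,200 Ci/mmol, close to the 9,131 Ci/mmol quoted above; commercial
# preparations (800-6,000 Ci/mmol) are lower because only a fraction of
# the molecules are labelled (the hot-to-cold ratio).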
Prior to the widespread use of fluorescence in the past three decades, radioactivity was the most common label. The primary advantage of fluorescence over radiotracers is that it does not require radiological controls and their associated expenses and safety measures. The decay of radioisotopes may limit the shelf life of a reagent, requiring its replacement and thus increasing expenses. Several fluorescent molecules can be used simultaneously (given that their emissions do not overlap; cf. FRET), whereas with radioactivity two isotopes can be used (tritium and a low-energy isotope such as P-33, distinguished by their different intensities), but this requires special equipment (a tritium screen and a regular phosphor-imaging screen, or a specific dual-channel detector). Fluorescence is not necessarily easier or more convenient to use, because fluorescence requires specialized equipment of its own and because quenching makes absolute and/or reproducible quantification difficult. The primary disadvantage of fluorescence versus radiotracers is a significant biological problem: chemically tagging a molecule with a fluorescent dye radically changes the structure of the molecule, which in turn can radically change the way that molecule interacts with other molecules. In contrast, intrinsic radiolabeling of a molecule can be done without altering its structure in any way. For example, substituting H-3 for a hydrogen atom or C-14 for a carbon atom does not change the conformation, structure, or any other property of the molecule; it merely switches forms of the same atom. Thus an intrinsically radiolabeled molecule is identical to its unlabeled counterpart. Measurement of biological phenomena by radiotracers is always direct. In contrast, many life science fluorescence applications are indirect, consisting of a fluorescent dye increasing, decreasing, or shifting in wavelength emission upon binding to the molecule of interest. If good health physics controls are maintained in a laboratory where radionuclides are used, it is unlikely that the overall radiation dose received by workers will be of much significance. Nevertheless, the effects of low doses are mostly unknown, so many regulations exist to avoid unnecessary risks, such as skin or internal exposure. Due to the low penetration power and the many variables involved, it is hard to convert a radioactive concentration to a dose. 1 μCi of P-32 on a square centimetre of skin (through a dead layer of a thickness of 70 μm) gives 7,961 rads (79.61 grays) per hour. Similarly, a mammogram gives an exposure of 300 mrem (3 mSv) on a larger volume (in the US, the average annual dose is 620 mrem or 6.2 mSv [ 8 ]).
https://en.wikipedia.org/wiki/Radioactivity_in_the_life_sciences
A radioallergosorbent test (RAST) is a blood test that uses radioimmunoassay to detect specific IgE antibodies in order to determine the substances a subject is allergic to. This is different from a skin allergy test, which determines allergy by the reaction of a person's skin to different substances. [ citation needed ] The two most commonly used methods of confirming allergen sensitization are skin testing and allergy blood testing. Both methods are recommended by the NIH guidelines and have similar diagnostic value in terms of sensitivity and specificity. [ 1 ] [ 2 ] Advantages of the allergy blood test include excellent reproducibility across the full measuring range of the calibration curve, very high specificity (it binds to allergen-specific IgE), and extremely high sensitivity compared with skin prick testing. In general, blood testing (in vitro, out of body) has major advantages over skin-prick testing (in vivo, in body): it is not always necessary to remove the patient from an antihistamine medication regimen, and it can be used when skin conditions (such as eczema) are so widespread that allergy skin testing cannot be done. Allergy blood tests, such as ImmunoCAP, are performed without procedure variations, and the results are of excellent standardization. [ 3 ] Adults and children of any age can take an allergy blood test. For babies and very young children, a single needle stick for allergy blood testing is often more gentle than several skin tests. However, skin testing techniques have improved: most skin testing does not involve needles and typically results in minimal patient discomfort. [ citation needed ] Drawbacks to RAST and ImmunoCAP techniques do exist. Compared to skin testing, ImmunoCAP and other RAST techniques take longer to perform and are less cost-effective. [ 4 ] Several studies have also found these tests to be less sensitive than skin testing for the detection of clinically relevant allergies. [ 5 ] False positive results may be obtained due to cross-reactivity of homologous proteins or by cross-reactive carbohydrate determinants (CCDs). [ 6 ] In the NIH food guidelines issued in December 2010 it was stated that "The predictive values associated with clinical evidence of allergy for ImmunoCAP cannot be applied to other test methods." [ 7 ] With over 4,000 scientific articles using ImmunoCAP and showing its clinical value, ImmunoCAP is perceived as the "gold standard" for in vitro IgE testing. [ 8 ] [ 9 ] The RAST is a radioimmunoassay to detect specific IgE antibodies to suspected or known allergens for the purpose of guiding the diagnosis of allergy. [ 10 ] [ 11 ] IgE is the antibody associated with Type I allergic response: for example, if a person exhibits a high level of IgE directed against pollen, the test may indicate the person is allergic to pollen (or pollen-like) proteins. A person who has outgrown an allergy may still have a positive IgE years after exposure. [ citation needed ] The suspected allergen is bound to an insoluble material and the patient's serum is added. If the serum contains antibodies to the allergen, those antibodies will bind to the allergen. Radiolabeled anti-human IgE antibody is added, where it binds to those IgE antibodies already bound to the insoluble material. The unbound anti-human IgE antibodies are washed away. The amount of radioactivity is proportional to the serum IgE for the allergen.
[ 12 ] RASTs are often used to test for allergies in a number of clinical situations. The RAST is scored on a scale from 0 to 6. The market-leading RAST methodology was invented and marketed in 1974 by Pharmacia Diagnostics AB, Uppsala, Sweden, and the acronym RAST is actually a brand name. In 1989, Pharmacia Diagnostics AB replaced it with a superior test named the ImmunoCAP Specific IgE blood test, which the literature may also describe as CAP RAST, CAP FEIA (fluorescence enzyme immunoassay), and Pharmacia CAP. A review of applicable quality assessment programs shows that this new test has replaced the original RAST in approximately 80% of the world's commercial clinical laboratories where specific IgE testing is performed. The newest version, the ImmunoCAP Specific IgE 0–100, is the only specific IgE assay to receive FDA approval to quantitatively report to its detection limit of 0.1 kU/L. This clearance is based on the CLSI/NCCLS-17A Limits of Detection and Limits of Quantitation, October 2004 guideline. [ citation needed ] Reflecting the guidelines for diagnosis and management of food allergy issued by the National Institutes of Health, in 2010 the United States National Institute of Allergy and Infectious Diseases recommended that RAST measurements of specific immunoglobulin E for the diagnosis of allergy be abandoned in favor of testing with more sensitive fluorescence enzyme-labeled assays. [ 13 ]
https://en.wikipedia.org/wiki/Radioallergosorbent_test
Radioanalytical chemistry focuses on the analysis of samples for their radionuclide content. Various methods are employed to purify and identify the radioelement of interest through chemical methods and sample measurement techniques. The field of radioanalytical chemistry was originally developed by Marie Curie, with contributions by Ernest Rutherford and Frederick Soddy. They developed chemical separation and radiation measurement techniques on terrestrial radioactive substances. During the twenty years that followed 1897, the concept of radionuclides was born. [ 1 ] Since Curie's time, applications of radioanalytical chemistry have proliferated. Modern advances in nuclear and radiochemistry research have allowed practitioners to apply chemistry and nuclear procedures to elucidate nuclear properties and reactions, use radioactive substances as tracers, and measure radionuclides in many different types of samples. [ 2 ] The importance of radioanalytical chemistry spans many fields including chemistry, physics, medicine, pharmacology, biology, ecology, hydrology, geology, forensics, atmospheric sciences, health protection, archeology, and engineering. Applications include forming and characterizing new elements, determining the age of materials, and creating radioactive reagents for specific tracer use in tissues and organs. The ongoing goal of radioanalytical researchers is to measure more radionuclides, at ever lower concentrations, in people and the environment. Alpha decay is characterized by the emission of an alpha particle, a ⁴He nucleus. This mode of decay causes the parent nucleus to decrease by two protons and two neutrons. This type of decay follows the relation:

{}_{Z}^{A}X \to {}_{Z-2}^{A-4}Y + {}_{2}^{4}\alpha [ 3 ]

Beta decay is characterized by the emission of a neutrino and a negatron, which is equivalent to an electron. This process occurs when a nucleus has an excess of neutrons with respect to protons, as compared to the stable isobar. This type of transition converts a neutron into a proton; similarly, a positron is released when a proton is converted into a neutron. These decays follow the relations:

{}_{Z}^{A}X \to {}_{Z+1}^{A}Y + \bar{\nu} + \beta^{-}
{}_{Z}^{A}X \to {}_{Z-1}^{A}Y + \nu + \beta^{+} [ 4 ]

Gamma-ray emission follows the previously discussed modes of decay when the decay leaves a daughter nucleus in an excited state. This nucleus is capable of further de-excitation to a lower energy state by the release of a photon. This decay follows the relation:

{}^{A}X^{*} \to {}^{A}Y + \gamma [ 5 ]

Gaseous ionization detectors collect and record the electrons freed from gaseous atoms and molecules by the interaction of radiation released by the source. A voltage potential is applied between two electrodes within a sealed system. Since the gaseous atoms are ionized after they interact with radiation, the freed electrons are attracted to the anode, which produces a signal. It is important to vary the applied voltage such that the response falls within a critical proportional range. The operating principle of semiconductor detectors is similar to that of gas ionization detectors, except that instead of ionization of gas atoms, free electrons and holes are produced in the solid, which create a signal at the electrodes. The advantage of solid-state detectors is the greater resolution of the resultant energy spectrum.
Usually NaI(Tl) detectors are used; for more precise applications, Ge(Li) and Si(Li) detectors have been developed. For extra-sensitive measurements, high-purity germanium detectors are used, cooled by liquid nitrogen. [ 6 ] Scintillation detectors use a photoluminescent material (such as ZnS) which interacts with radiation. When a particle emitted by radioactive decay strikes the photoluminescent material, a photon is released. This photon is multiplied in a photomultiplier tube, which converts light into an electrical signal. This signal is then processed and converted into a channel. By comparing the number of counts to the energy level (typically in keV or MeV), the type of decay can be determined. Because radioactive nuclides have properties similar to those of their stable, inactive counterparts, similar analytical chemistry separation techniques can be used. These separation methods include precipitation, ion exchange, liquid–liquid extraction, solid-phase extraction, distillation, and electrodeposition. Samples with very low concentrations are difficult to measure accurately because radioactive atoms unexpectedly deposit on surfaces. Sample loss at trace levels may be due to adhesion to container walls and filter surface sites by ionic or electrostatic adsorption, as well as to metal foils and glass slides. Sample loss is an ever-present concern, especially at the beginning of the analysis path, where sequential steps may compound these losses. Various solutions are known to circumvent these losses, including adding an inactive carrier or adding a tracer. Research has also shown that pretreatment of glassware and plastic surfaces can reduce radionuclide sorption by saturating the sites. [ 7 ] Since small amounts of radionuclides are typically being analyzed, the mechanics of manipulating tiny quantities is challenging. This problem is classically addressed by the use of carrier ions. Carrier addition involves the addition of a known mass of a stable ion to the radionuclide-containing sample solution. The carrier is of the identical element but is non-radioactive. The carrier and the radionuclide of interest have identical chemical properties. The amount of carrier added is typically selected for ease of weighing, such that the accuracy of the resultant weight is within 1%. For alpha particles, special techniques must be applied to obtain the required thin sample sources. Carriers were heavily used by Marie Curie and were employed in the first demonstration of nuclear fission. [ 8 ] Isotope dilution is the reverse of carrier addition. It involves the addition of a known (small) amount of radionuclide to the sample that contains a known stable element. This additive is the "tracer." It is added at the start of the analysis procedure. After the final measurements are recorded, sample loss can be determined quantitatively. This procedure avoids the need for any quantitative recovery, greatly simplifying the analytical process.
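The quantitative bookkeeping behind tracer addition is straightforward: the recovered fraction of the tracer's activity equals the chemical yield of the separation, which in turn corrects the analyte measurement. A minimal sketch (Python; all numbers illustrative):

# Yield correction via tracer addition: the recovered fraction of the
# tracer's activity measures the chemical yield of the separation,
# which then corrects the measured analyte activity.

tracer_added_bq     = 100.0   # known tracer activity spiked into the sample
tracer_recovered_bq = 82.0    # tracer activity measured after separation
analyte_measured_bq = 41.0    # analyte activity measured after separation

chemical_yield = tracer_recovered_bq / tracer_added_bq   # 0.82
analyte_true_bq = analyte_measured_bq / chemical_yield   # 50.0

print(f"yield = {chemical_yield:.0%}, corrected activity = {analyte_true_bq:.1f} Bq")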
As with any analytical chemistry technique, quality control is an important factor to maintain. A laboratory must produce trustworthy results. This can be accomplished by a laboratory's continual effort to maintain instrument calibration, measurement reproducibility, and applicability of analytical methods. [ 9 ] In all laboratories there must be a quality assurance plan. This plan describes the quality system and the procedures in place to obtain consistent results. Such results must be authentic, appropriately documented, and technically defensible. [ 10 ] Elements of quality assurance include organization, personnel training, laboratory operating procedures, procurement documents, chain of custody records, standard certificates, analytical records, standard procedures, QC sample analysis programs and results, instrument testing and maintenance records, results of performance demonstration projects, results of data assessment, audit reports, and record retention policies. The cost of quality assurance is continually on the rise, but the benefits far outweigh this cost. The average quality assurance workload has risen from 10% to a modern load of 20–30%. This heightened focus on quality assurance ensures that reliable measurements are achieved. The cost of failure far outweighs the cost of prevention and appraisal. Finally, adherence to stringent regulations ensures that results are scientifically defensible in the event of a lawsuit.
https://en.wikipedia.org/wiki/Radioanalytical_chemistry
A radiobinding assay is a method of detecting and quantifying antibodies targeted toward a specific antigen. As such, it can be seen as the inverse of radioimmunoassay, which quantifies an antigen by use of corresponding antibodies. The corresponding antigen is radiolabeled and mixed with the fluid that may contain the antibody, such as blood serum from a person. The presence of antibodies causes precipitation of antibody-antigen complexes, which can be collected by centrifugation into pellets. The amount of antibody is proportional to the radioactivity of the pellet, as determined by gamma counting. [ 1 ] It is used to detect most autoantibodies seen in latent autoimmune diabetes. [ 2 ]
https://en.wikipedia.org/wiki/Radiobinding_assay
Radiobiology (also known as radiation biology, and uncommonly as actinobiology) is a field of clinical and basic medical sciences that involves the study of the effects of radiation on living tissue [ 1 ] (including ionizing and non-ionizing radiation), [ 2 ] [ 3 ] in particular the health effects of radiation. Ionizing radiation is generally harmful and potentially lethal to living things but can have health benefits in radiation therapy for the treatment of cancer and thyrotoxicosis. Its most common impact is the induction of cancer with a latent period of years or decades after exposure. High doses can cause visually dramatic radiation burns and/or rapid fatality through acute radiation syndrome. Controlled doses are used for medical imaging and radiotherapy. Most adverse health effects of radiation exposure may be grouped in two general categories: deterministic effects and stochastic effects. Some effects of ionizing radiation on human health are stochastic, meaning that their probability of occurrence increases with dose, while the severity is independent of dose. [ 5 ] Radiation-induced cancer, teratogenesis, cognitive decline, and heart disease are all stochastic effects induced by ionizing radiation. The most common such impact is the stochastic induction of cancer with a latent period of years or decades after exposure. The mechanism by which this occurs is well understood, but quantitative models predicting the level of risk remain controversial. The most widely accepted model posits that the incidence of cancers due to ionizing radiation increases linearly with effective radiation dose at a rate of 5.5% per sievert. [ 6 ] If this linear model is correct, then natural background radiation is the most hazardous source of radiation to general public health, followed by medical imaging as a close second. Quantitative data on the effects of ionizing radiation on human health are relatively limited compared to other medical conditions because of the low number of cases to date and because of the stochastic nature of some of the effects. Stochastic effects can only be measured through large epidemiological studies where enough data has been collected to remove confounding factors such as smoking habits and other lifestyle factors. The richest source of high-quality data comes from the study of Japanese atomic bomb survivors. In vitro and animal experiments are informative, but radioresistance varies greatly across species. The added lifetime risk of developing cancer from a single abdominal CT of 8 mSv is estimated to be 0.05%, or 1 in 2,000. [ 7 ] Deterministic effects are those that reliably occur above a threshold dose, and their severity increases with dose. [ 5 ] Deterministic effects are not necessarily more or less serious than stochastic effects; either can ultimately lead to a temporary nuisance or a fatality. Examples of deterministic effects include radiation burns and acute radiation syndrome. The US National Academy of Sciences Biological Effects of Ionizing Radiation Committee "has concluded that there is no compelling evidence to indicate a dose threshold below which the risk of tumor induction is zero". [ 8 ]
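Under that linear model, the excess lifetime cancer risk is simply the dose coefficient times the effective dose, so the abdominal-CT figure quoted above can be checked in one line. A sketch using only the numbers given in the text (Python):

# Linear no-threshold estimate of excess lifetime cancer risk:
# risk = coefficient (per sievert) * effective dose (in sieverts).

RISK_PER_SIEVERT = 0.055   # 5.5% per sievert, the linear coefficient above
dose_sv = 8e-3             # 8 mSv, a single abdominal CT

excess_risk = RISK_PER_SIEVERT * dose_sv
print(f"excess lifetime risk ~ {excess_risk:.3%}")  # ~0.044%, about 1 in 2,300
# Consistent with the ~0.05% (1 in 2,000) estimate quoted in the text.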
This is due to the high relative biological effectiveness of alpha radiation in causing biological damage after alpha-emitting radioisotopes enter living cells. Ingested alpha-emitting radioisotopes such as transuranics or actinides are on average about 20 times, and in some experiments up to 1000 times, more dangerous than an equivalent activity of beta-emitting or gamma-emitting radioisotopes. If the radiation type is not known, it can be determined by differential measurements in the presence of electrical fields, magnetic fields, or with varying amounts of shielding. The risk of developing radiation-induced cancer at some point in life is greater when the exposure occurs in utero than in adulthood, both because cells are more vulnerable while they are growing and because there is a much longer remaining lifespan after the dose in which cancer can develop. Excessive radiation exposure can also harm the unborn child or the reproductive organs. [ 10 ] Research indicates that undergoing more than one scan within nine months can harm the unborn child. [ 11 ] Possible deterministic effects of radiation exposure in pregnancy include miscarriage, structural birth defects, growth restriction and intellectual disability. [ 12 ] These deterministic effects have been studied in, for example, survivors of the atomic bombings of Hiroshima and Nagasaki and in cases where radiation therapy has been necessary during pregnancy; the intellectual deficit has been estimated to be about 25 IQ points per 1,000 mGy at 10 to 17 weeks of gestational age. [ 12 ] These effects are sometimes relevant when deciding about medical imaging in pregnancy, since projectional radiography and CT scanning expose the fetus to radiation. Also, the risk for the mother of later acquiring radiation-induced breast cancer seems to be particularly high for radiation doses received during pregnancy. [ 13 ] The human body cannot sense ionizing radiation except at very high doses, but the effects of ionization can be used to characterize the radiation. Parameters of interest include disintegration rate, particle flux, particle type, beam energy, kerma, dose rate, and radiation dose. The monitoring and calculation of doses to safeguard human health is called dosimetry and is undertaken within the science of health physics. Key measurement tools are dosimeters, which give the external effective dose uptake, and bioassays for ingested dose. The article on the sievert summarises the recommendations of the ICRU and ICRP on the use of dose quantities, includes a guide to the effects of ionizing radiation as measured in sieverts, and gives examples of approximate figures of dose uptake in certain situations. The committed dose is a measure of the stochastic health risk due to an intake of radioactive material into the human body. The ICRP states: "For internal exposure, committed effective doses are generally determined from an assessment of the intakes of radionuclides from bioassay measurements or other quantities. The radiation dose is determined from the intake using recommended dose coefficients". [ 14 ] The absorbed dose is a physical dose quantity D representing the mean energy imparted to matter per unit mass by ionizing radiation. In the SI system of units, the unit of measure is joules per kilogram, and its special name is gray (Gy). [ 15 ] The non-SI CGS unit rad is sometimes also used, predominantly in the USA.
To represent stochastic risk, the equivalent dose H T and effective dose E are used, with appropriate dose factors and coefficients to calculate these from the absorbed dose. [ 16 ] Equivalent and effective dose quantities are expressed in units of the sievert or rem, which implies that biological effects have been taken into account. These are usually calculated in accordance with the recommendations of the International Commission on Radiological Protection (ICRP) and the International Commission on Radiation Units and Measurements (ICRU), which together have developed a coherent system of radiological protection quantities. The ICRP manages the International System of Radiological Protection, which sets recommended limits for dose uptake. Dose values may represent absorbed, equivalent, effective, or committed dose. Other important organizations also study the topic. External exposure is exposure which occurs when the radioactive source (or other radiation source) is outside (and remains outside) the organism which is exposed. External exposure is relatively easy to estimate, and the irradiated organism does not become radioactive, except in the case of an intense neutron beam, which causes activation. Internal exposure occurs when the radioactive material enters the organism and the radioactive atoms become incorporated into the organism. This can occur through inhalation, ingestion, or injection. When radioactive compounds enter the human body, the effects are different from those resulting from exposure to an external radiation source. Especially in the case of alpha radiation, which normally does not penetrate the skin, the exposure can be much more damaging after ingestion or inhalation. The radiation exposure is normally expressed as a committed dose. Although radiation was discovered in the late 19th century, the dangers of radioactivity and of radiation were not immediately recognized. Acute effects of radiation were first observed in the use of X-rays when German physicist Wilhelm Röntgen intentionally subjected his fingers to X-rays in 1895. He published his observations concerning the burns that developed, though he misattributed them to ozone, a free radical produced in air by X-rays. Other free radicals produced within the body are now understood to be more important. His injuries later healed. As a field of medical sciences, radiobiology originated from Leopold Freund's 1896 demonstration of the therapeutic treatment of a hairy mole using the newly discovered form of electromagnetic radiation called X-rays. After irradiating frogs and insects with X-rays in early 1896, Ivan Romanovich Tarkhanov concluded that these newly discovered rays not only photograph, but also "affect the living function". [ 21 ] At the same time, Pierre and Marie Curie discovered the radioactive elements polonium and radium, later used to treat cancer. The genetic effects of radiation, including the effects on cancer risk, were recognized much later. In 1927 Hermann Joseph Muller published research showing genetic effects, and in 1946 he was awarded the Nobel Prize for his findings. More generally, the 1930s saw attempts to develop a general model for radiobiology. Notable here was Douglas Lea, [ 22 ] [ 23 ] whose presentation also included an exhaustive review of some 400 supporting publications.
[ 24 ] [ 25 ] Before the biological effects of radiation were known, many physicians and corporations had begun marketing radioactive substances as patent medicine and radioactive quackery. Examples were radium enema treatments and radium-containing waters to be drunk as tonics. Marie Curie spoke out against this sort of treatment, warning that the effects of radiation on the human body were not well understood. Curie later died of aplastic anemia caused by radiation poisoning. Eben Byers, a famous American socialite, died of multiple cancers (but not acute radiation syndrome) in 1932 after consuming large quantities of radium over several years; his death drew public attention to the dangers of radiation. By the 1930s, after a number of cases of bone necrosis and death among enthusiasts, radium-containing medical products had nearly vanished from the market. In the United States, the experience of the so-called Radium Girls, where thousands of radium-dial painters contracted oral cancers [ 26 ] (but no cases of acute radiation syndrome), [ 27 ] popularized the warnings of occupational health associated with radiation hazards. Robley D. Evans, at MIT, developed the first standard for permissible body burden of radium, a key step in the establishment of nuclear medicine as a field of study. With the development of nuclear reactors and nuclear weapons in the 1940s, heightened scientific attention was given to the study of all manner of radiation effects. The atomic bombings of Hiroshima and Nagasaki resulted in a large number of incidents of radiation poisoning, allowing for greater insight into its symptoms and dangers. Red Cross Hospital surgeon Dr. Terufumi Sasaki led intensive research into the syndrome in the weeks and months following the Hiroshima bombing. Sasaki and his team were able to monitor the effects of radiation in patients at varying proximities to the blast itself, leading to the establishment of three recorded stages of the syndrome. Within 25–30 days of the explosion, the Red Cross surgeon noticed a sharp drop in white blood cell count and established this drop, along with symptoms of fever, as prognostic standards for acute radiation syndrome. [ 28 ] Actress Midori Naka, who was present during the atomic bombing of Hiroshima, was the first victim of radiation poisoning to be extensively studied. Her death on August 24, 1945, was the first death ever to be officially certified as a result of radiation poisoning (or "atomic bomb disease"). The Atomic Bomb Casualty Commission and the Radiation Effects Research Foundation have been monitoring the health status of the survivors and their descendants since 1946. They have found that radiation exposure increases cancer risk, but also that the average lifespan of survivors was reduced by only a few months compared to those not exposed to radiation. No health effects of any sort have thus far been detected in children of the survivors. [ 29 ] The interactions between organisms, electromagnetic fields (EMF), and ionizing radiation can be studied in a number of ways. Radiobiology experiments typically make use of a dedicated radiation source.
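The stochastic risk arithmetic quoted above can be made concrete with a short calculation. The following Python sketch is illustrative only (it is not part of the article and assumes the linear no-threshold model holds); it reproduces the cited abdominal CT figure from the 5.5% per sievert coefficient:

```python
# Hedged sketch: added lifetime cancer risk under the linear no-threshold
# (LNT) model, using the nominal 5.5% per sievert coefficient cited above.
RISK_PER_SIEVERT = 0.055

def lnt_added_risk(effective_dose_sv: float) -> float:
    """Added lifetime cancer risk for a given effective dose in sieverts,
    assuming the (controversial) linear no-threshold model."""
    return RISK_PER_SIEVERT * effective_dose_sv

ct_dose_sv = 0.008  # a single abdominal CT of 8 mSv, as quoted in the text
risk = lnt_added_risk(ct_dose_sv)
print(f"Added risk: {risk:.3%}, about 1 in {round(1 / risk)}")
# Prints "Added risk: 0.044%, about 1 in 2273", consistent with the quoted
# estimate of 0.05%, or roughly 1 in 2,000.
```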
https://en.wikipedia.org/wiki/Radiobiology
Radiocarbon is a scientific journal devoted to the topic of radiocarbon dating. It was founded in 1959 as a supplement to the American Journal of Science, and is an important source of data and information about radiocarbon dating. It publishes many radiocarbon results, and since 1979 it has published the proceedings of the international conferences on radiocarbon dating. [ 1 ] The journal is published six times per year. As of 2016, it is published by Cambridge University Press. [ 2 ] [ 3 ]
https://en.wikipedia.org/wiki/Radiocarbon_(journal)
Radiochemistry is the chemistry of radioactive materials, where radioactive isotopes of elements are used to study the properties and chemical reactions of non-radioactive isotopes (within radiochemistry, a substance lacking radioactivity is often described as inactive, as its isotopes are stable). Much of radiochemistry deals with the use of radioactivity to study ordinary chemical reactions. This is very different from radiation chemistry, where the radiation levels are kept too low to influence the chemistry. Radiochemistry includes the study of both natural and man-made radioisotopes. All radioisotopes are unstable isotopes of elements that undergo nuclear decay and emit some form of radiation. The radiation emitted can be of several types, including alpha, beta, gamma radiation, proton, and neutron emission, along with neutrino and antiparticle emission decay pathways. 1. α (alpha) radiation: the emission of an alpha particle (which contains 2 protons and 2 neutrons) from an atomic nucleus. When this occurs, the atom's atomic mass decreases by 4 units and the atomic number decreases by 2. 2. β (beta) radiation: the transmutation of a neutron into an electron and a proton. After this happens, the electron is emitted from the nucleus into the electron cloud. 3. γ (gamma) radiation: the emission of electromagnetic energy (such as gamma rays) from the nucleus of an atom. This usually occurs during alpha or beta radioactive decay. These three types of radiation can be distinguished by their difference in penetrating power. Alpha radiation, consisting of helium nuclei, can be stopped quite easily by a few centimetres of air or a piece of paper. Beta radiation, consisting of electrons, can be stopped by an aluminium sheet just a few millimetres thick. Gamma radiation, consisting of massless, chargeless high-energy photons, is the most penetrating of the three and requires an appreciable amount of heavy metal radiation shielding (usually lead or barium-based) to reduce its intensity. By neutron irradiation of objects, it is possible to induce radioactivity; this activation of stable isotopes to create radioisotopes is the basis of neutron activation analysis. One notable object that has been studied in this way is hair from Napoleon's head, which has been examined for its arsenic content. [ 1 ] A series of different experimental methods exist; these have been designed to enable the measurement of a range of different elements in different matrices. To reduce the effect of the matrix, it is common to use chemical extraction of the wanted element and/or to allow the radioactivity due to the matrix elements to decay before the measurement of the radioactivity. Since the matrix effect can be corrected by observing the decay spectrum, little or no sample preparation is required for some samples, making neutron activation analysis less susceptible to contamination. The effect of a series of different cooling times can be seen by considering a hypothetical sample containing sodium, uranium, and cobalt in a 100:10:1 ratio that has been subjected to a very short pulse of thermal neutrons (a numerical sketch of this behaviour is given below). The initial radioactivity would be dominated by the 24 Na activity (half-life 15 h), but with increasing time the 239 Np (half-life 2.4 d after formation from parent 239 U with half-life 24 min) and finally the 60 Co activity (5.3 yr) would predominate. One biological application is the study of DNA using radioactive phosphorus-32.
In these experiments, stable phosphorus is replaced by the chemically identical radioactive P-32, and the resulting radioactivity is used in the analysis of the molecules and their behaviour. Another example is the work that was done on the methylation of elements such as sulfur, selenium, tellurium, and polonium by living organisms. It has been shown that bacteria can convert these elements into volatile compounds; [ 2 ] it is thought that methylcobalamin (vitamin B 12) alkylates these elements to create the dimethyl compounds. It has been shown that a combination of cobaloxime and inorganic polonium in sterile water forms a volatile polonium compound, while a control experiment that did not contain the cobalt compound did not form the volatile polonium compound. [ 3 ] For the sulfur work, the isotope 35 S was used, while for polonium 207 Po was used. In related work, the addition of 57 Co to a bacterial culture, followed by isolation of the cobalamin from the bacteria and measurement of its radioactivity, showed that the bacteria convert available cobalt into methylcobalamin. In medicine, PET (positron emission tomography) scans are commonly used for diagnostic purposes. A radioactive tracer is injected intravenously into the patient, who is then taken to the PET machine. The tracer emits radiation outward from the patient, and the cameras in the machine interpret the radiation from the tracer. PET machines use solid-state scintillation detection because of its high detection efficiency: NaI(Tl) crystals absorb the tracer's radiation and produce photons that are converted into an electrical signal for the machine to analyze. [ 4 ] Radiochemistry also includes the study of the behaviour of radioisotopes in the environment; for instance, a forest or grass fire can make radioisotopes mobile again. [ 5 ] In these experiments, fires were started in the exclusion zone around Chernobyl and the radioactivity in the air downwind was measured. A vast number of processes can release radioactivity into the environment. For example, the action of cosmic rays on the air is responsible for the formation of radioisotopes (such as 14 C and 32 P); the decay of 226 Ra forms 222 Rn, a gas which can diffuse through rocks before entering buildings [ 6 ] [ 7 ] [ 8 ] and dissolve in water, thus entering drinking water; [ 9 ] and human activities such as bomb tests, accidents, [ 10 ] and normal releases from industry have resulted in the release of radioactivity. The environmental chemistry of some radioactive elements such as plutonium is complicated by the fact that solutions of this element can undergo disproportionation, [ 11 ] and as a result many different oxidation states can coexist at once. Some work has been done on the identification of the oxidation state and coordination number of plutonium and the other actinides under different conditions. This includes work on both solutions of relatively simple complexes [ 12 ] [ 13 ] and work on colloids. [ 14 ] Two of the key matrices are soil/rocks and concrete; in these systems, the chemical properties of plutonium have been studied using methods such as EXAFS and XANES. [ 15 ] While binding of a metal to the surfaces of the soil particles can prevent its movement through a layer of soil, it is possible for the soil particles that bear the radioactive metal to migrate as colloidal particles through the soil.
This has been shown to occur using soil particles labeled with 134 Cs; these are able to move through cracks in the soil. [ 16 ] Radioactivity has been present everywhere on Earth since its formation. According to the International Atomic Energy Agency, one kilogram of soil typically contains the following amounts of the following natural radioisotopes: 370 Bq of 40 K (typical range 100–700 Bq), 25 Bq of 226 Ra (typical range 10–50 Bq), 25 Bq of 238 U (typical range 10–50 Bq) and 25 Bq of 232 Th (typical range 7–50 Bq). [ 17 ] The action of micro-organisms can fix uranium; Thermoanaerobacter can use chromium(VI), iron(III), cobalt(III), manganese(IV), and uranium(VI) as electron acceptors, while acetate, glucose, hydrogen, lactate, pyruvate, succinate, and xylose can act as electron donors for the metabolism of the bacteria. In this way, the metals can be reduced to form magnetite (Fe₃O₄), siderite (FeCO₃), rhodochrosite (MnCO₃), and uraninite (UO₂). [ 18 ] Other researchers have also worked on the fixation of uranium using bacteria. Francis R. Livens et al. (working at Manchester) have suggested that the reason why Geobacter sulfurreducens can reduce UO₂²⁺ cations to uranium dioxide is that the bacteria reduce the uranyl cations to UO₂⁺, which then undergoes disproportionation to form UO₂²⁺ and UO₂. This reasoning was based (at least in part) on the observation that NpO₂⁺ is not converted to an insoluble neptunium oxide by the bacteria. [ 19 ] Despite the growing use of nuclear medicine, the potential expansion of nuclear power plants, and worries about protection against nuclear threats and the management of the nuclear waste generated in past decades, the number of students opting to specialize in nuclear and radiochemistry has decreased significantly over the past few decades. Now, with many experts in these fields approaching retirement age, action is needed to avoid a workforce gap in these critical fields, for example by building student interest in these careers, expanding the educational capacity of universities and colleges, and providing more specific on-the-job training. [ 20 ] Nuclear and radiochemistry (NRC) is mostly taught at the university level, usually first at the master's and PhD degree level. In Europe, substantial effort is being made to harmonize and prepare NRC education for the industry's and society's future needs. This effort is being coordinated in projects funded by the Coordinated Action supported by the European Atomic Energy Community's 7th Framework Programme: the CINCH-II project, Cooperation in education and training In Nuclear CHemistry.
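As a concrete illustration of the cooling-time behaviour described above, the following Python sketch is illustrative only: the initial activity ratios are hypothetical round numbers, and the short-lived 239 U parent is treated as having already decayed to 239 Np. It applies the decay law A(t) = A₀ · 2^(−t/T½) to the three dominant activities:

```python
import math

HOUR, DAY, YEAR = 3600.0, 86400.0, 365.25 * 86400.0

# (nuclide, half-life in seconds, assumed initial activity in arbitrary units)
nuclides = [
    ("Na-24", 15.0 * HOUR, 1000.0),
    ("Np-239", 2.4 * DAY, 100.0),
    ("Co-60", 5.3 * YEAR, 1.0),
]

def activity(a0: float, half_life: float, t: float) -> float:
    # Radioactive decay law: A(t) = A0 * 2**(-t / T_half)
    return a0 * 2.0 ** (-t / half_life)

for label, t in [("t = 0", 0.0), ("t = 3 d", 3 * DAY), ("t = 30 d", 30 * DAY)]:
    acts = {name: activity(a0, hl, t) for name, hl, a0 in nuclides}
    dominant = max(acts, key=acts.get)
    print(f"{label}: dominant activity is {dominant}")
# Prints Na-24 at t = 0, Np-239 at t = 3 d, and Co-60 at t = 30 d,
# matching the qualitative sequence described in the text.
```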
https://en.wikipedia.org/wiki/Radiochemistry
Radiocom 2000 was a French mobile telephone network launched in 1985, which gradually replaced the earlier analogue "public correspondence" network. [ 1 ] [ 2 ] [ 3 ] [ 4 ] It was deployed by France Télécom Mobiles and is classified in the category of first-generation (1G) mobile networks. The network covered almost all of mainland France. The subscriptions offered could be regional (Île de France, Lyon Region, Provence-Alpes-Côte d'Azur), provincial, or national. [ 2 ] Operating in the VHF frequency band, the network used digital technology for signalling and analogue modulation for voice. Frequencies were dynamically allocated according to needs. It was with Radiocom 2000 that the first concepts of cellular telephony appeared: shortly after its launch came handover, under the name "High Density Network" (the capacity to change cells dynamically), and the allocation of frequencies within a cell. Faced with the growing demand of subscribers, several frequency bands were used on the Radiocom 2000 network, in particular the 200 MHz and 160 MHz bands in the Île-de-France, Lyon and Marseille regions, as well as the 175 MHz band from 1990 in the north-eastern quarter of France. [ 2 ] [ 3 ] To meet the demand for additional capacity, from 1990 mobile devices became dual-band 400/900 MHz, with Matra, Mobitel, and Sagem as manufacturers. Alcatel and Nokia distributed these same handsets under their respective brands. [ 2 ] [ 3 ] Handsets used rechargeable nickel–cadmium batteries. The antenna was smaller than that of a terminal from the early 1980s, but the terminal was still bulky and prohibitively expensive (with device rental and a subscription). In 1988 it had 60,000 subscribers, and more than 90% of the devices were installed on board vehicles. That same year, competition appeared with the birth of the Société française de radiotéléphones (SFR), using the NMT-F (Nordic Mobile Telephone "French") standard. [ 2 ] On 1 October 1998, a shutdown date of 31 December 1998 was confirmed for the Radiocom 2000 network; in the end, however, the shutdown was delayed by one and a half years. [ 2 ] On 28 July 2000, the Radiocom 2000 network and national subscriptions (400 MHz + 900 MHz) were closed in favour of the GSM standard. NMT-F services were also closed that day. The last subscribers to the Radiocom 2000 system were then offered a switch to the new GSM standard, on the Itinéris network of France Télécom. [ 2 ] [ 3 ]
https://en.wikipedia.org/wiki/Radiocom_2000
In geometry, a radiodrome is a specific type of pursuit curve: the path traced by a point that continuously moves toward a target traveling in a straight line at constant speed. The term comes from the Latin radius (ray or spoke) and the Greek dromos (running or racetrack), reflecting the radial nature of the motion. The most classic and widely recognized example is the so-called dog curve, which describes the path of a dog swimming across a river toward a hare moving along the opposite bank. Because of the current, the dog must constantly adjust its heading, resulting in a longer, curved trajectory. This case was first described by the French mathematician and hydrographer Pierre Bouguer in 1732. Radiodromes are distinguished from other pursuit curves by the assumption that the pursuer always heads directly toward the target's current position, while the target moves at a constant velocity along a straight path.

Introduce a coordinate system with origin at the position of the dog at time zero and with the $y$-axis in the direction the hare is running with the constant speed $V_t$. The position of the hare at time zero is $(A_x, A_y)$ with $A_x > 0$, and at time $t$ it is

$$ (T_x, T_y) = (A_x,\ A_y + V_t t). \tag{1} $$

The dog runs with the constant speed $V_d$ towards the instantaneous position of the hare. The differential equations corresponding to the movement of the dog, $(x(t), y(t))$, are consequently

$$ \frac{dx}{dt} = V_d \, \frac{T_x - x}{\sqrt{(T_x - x)^2 + (T_y - y)^2}}, \tag{2} $$

$$ \frac{dy}{dt} = V_d \, \frac{T_y - y}{\sqrt{(T_x - x)^2 + (T_y - y)^2}}. \tag{3} $$

It is possible to obtain a closed-form analytic expression $y = f(x)$ for the motion of the dog. From (2) and (3), it follows that

$$ y'(x) = \frac{T_y - y}{T_x - x}. \tag{4} $$

Multiplying both sides with $T_x - x$ and taking the derivative with respect to $x$, using that

$$ \frac{dT_y}{dx} = V_t \, \frac{dt}{dx} = \frac{V_t}{V_d} \sqrt{1 + (y')^2}, \tag{5} $$

one gets

$$ y'' = \frac{V_t}{V_d} \, \frac{\sqrt{1 + (y')^2}}{A_x - x} \tag{6} $$

or

$$ \frac{y''}{\sqrt{1 + (y')^2}} = \frac{V_t}{V_d} \, \frac{1}{A_x - x}. \tag{7} $$

From this relation, it follows that

$$ y' = \sinh\left( B - \frac{V_t}{V_d} \ln(A_x - x) \right), \tag{8} $$

where $B$ is the constant of integration determined by the initial value of $y'$ at time zero, $y'(0) = \sinh(B - (V_t/V_d)\ln A_x)$, i.e.,

$$ B = \frac{V_t}{V_d} \ln A_x + \ln\left( y'(0) + \sqrt{1 + y'(0)^2} \right). \tag{9} $$

From (8) and (9), it follows after some computation that

$$ y' = \frac{1}{2} \left[ \left( y'(0) + \sqrt{1 + y'(0)^2} \right) \left( 1 - \frac{x}{A_x} \right)^{-V_t/V_d} - \frac{\left( 1 - \frac{x}{A_x} \right)^{V_t/V_d}}{y'(0) + \sqrt{1 + y'(0)^2}} \right]. \tag{10} $$

Furthermore, since $y(0) = 0$, it follows from (1) and (4) that

$$ y'(0) = \frac{A_y}{A_x}. \tag{11} $$

If, now, $V_t \neq V_d$, relation (10) integrates to

$$ y = C - \frac{A_x}{2} \left[ \frac{\left( y'(0) + \sqrt{1 + y'(0)^2} \right) \left( 1 - \frac{x}{A_x} \right)^{1 - V_t/V_d}}{1 - V_t/V_d} - \frac{\left( 1 - \frac{x}{A_x} \right)^{1 + V_t/V_d}}{\left( y'(0) + \sqrt{1 + y'(0)^2} \right) \left( 1 + V_t/V_d \right)} \right], \tag{12} $$

where $C$ is the constant of integration. Since again $y(0) = 0$, it is

$$ C = \frac{A_x}{2} \left[ \frac{y'(0) + \sqrt{1 + y'(0)^2}}{1 - V_t/V_d} - \frac{1}{\left( y'(0) + \sqrt{1 + y'(0)^2} \right) \left( 1 + V_t/V_d \right)} \right]. \tag{13} $$

The equations (11), (12) and (13), then, together imply

$$ y = \frac{1}{2} \left[ \frac{A_y + \sqrt{A_x^2 + A_y^2}}{1 - V_t/V_d} \left( 1 - \left( 1 - \frac{x}{A_x} \right)^{1 - V_t/V_d} \right) - \frac{\sqrt{A_x^2 + A_y^2} - A_y}{1 + V_t/V_d} \left( 1 - \left( 1 - \frac{x}{A_x} \right)^{1 + V_t/V_d} \right) \right]. \tag{14} $$

If $V_t = V_d$, relation (10) gives, instead,

$$ y = C - \frac{A_x}{2} \left[ \left( y'(0) + \sqrt{1 + y'(0)^2} \right) \ln\left( 1 - \frac{x}{A_x} \right) - \frac{\left( 1 - \frac{x}{A_x} \right)^2}{2 \left( y'(0) + \sqrt{1 + y'(0)^2} \right)} \right]. \tag{15} $$

Using $y(0) = 0$ once again, it follows that

$$ C = -\frac{A_x}{4 \left( y'(0) + \sqrt{1 + y'(0)^2} \right)}. \tag{16} $$

The equations (11), (15) and (16), then, together imply that

$$ y = \frac{A_y + \sqrt{A_x^2 + A_y^2}}{2} \ln\left( \frac{A_x}{A_x - x} \right) - \frac{\sqrt{A_x^2 + A_y^2} - A_y}{4} \left( 1 - \left( 1 - \frac{x}{A_x} \right)^2 \right). \tag{17} $$

If $V_t < V_d$, it follows from (14) that the dog catches the hare at $x = A_x$, at the point

$$ y(A_x) = \frac{1}{2} \left[ \frac{A_y + \sqrt{A_x^2 + A_y^2}}{1 - V_t/V_d} - \frac{\sqrt{A_x^2 + A_y^2} - A_y}{1 + V_t/V_d} \right]. \tag{18} $$

If $V_t \geq V_d$, one has from (14) and (17) that $\lim_{x \to A_x} y(x) = \infty$, which means that the hare will never be caught, whenever the chase starts.
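The closed-form result can be checked numerically. The following Python sketch is illustrative and not part of the article (the parameter values are arbitrary); it integrates the pursuit equations (2) and (3) directly and compares the catch point with equation (18):

```python
import math

def simulate_catch(Ax, Ay, Vt, Vd, dt=1e-5):
    """Integrate equations (2)-(3) with small time steps and return the
    dog's y-coordinate at the moment it reaches the hare (requires Vt < Vd)."""
    x, y, t = 0.0, 0.0, 0.0
    while True:
        Tx, Ty = Ax, Ay + Vt * t            # hare position, equation (1)
        d = math.hypot(Tx - x, Ty - y)
        if d <= Vd * dt:                     # dog within one step of the hare
            return Ty
        x += Vd * dt * (Tx - x) / d          # equation (2)
        y += Vd * dt * (Ty - y) / d          # equation (3)
        t += dt

def catch_point(Ax, Ay, Vt, Vd):
    """Catch point y(Ax) from equation (18), valid for Vt < Vd."""
    r = math.hypot(Ax, Ay)
    k = Vt / Vd
    return 0.5 * ((Ay + r) / (1 - k) - (r - Ay) / (1 + k))

Ax, Ay, Vt, Vd = 1.0, 0.0, 0.5, 1.0          # hare starts directly across
print(simulate_catch(Ax, Ay, Vt, Vd))        # ~0.6667 (numerical)
print(catch_point(Ax, Ay, Vt, Vd))           # 2/3 exactly for these values
```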
https://en.wikipedia.org/wiki/Radiodrome
Radioecology is the branch of ecology concerning the presence of radioactivity in Earth's ecosystems. Investigations in radioecology include field sampling, experimental field and laboratory procedures, and the development of environmentally predictive simulation models in an attempt to understand how radioactive material migrates through the environment. The practice consists of techniques from the general sciences of physics, chemistry, mathematics, biology, and ecology, coupled with applications in radiation protection. Radioecological studies provide the necessary data for dose estimation and risk assessment regarding radioactive pollution and its effects on human and environmental health. [ 1 ] Radioecologists detect and evaluate the effects of ionizing radiation and radionuclides on ecosystems, and then assess their risks and dangers. Interest and research in radioecology increased significantly as a result of the Chernobyl disaster, in order to ascertain and manage the risks involved. Radioecology arose in line with increasing nuclear activities, particularly following the Second World War, in response to nuclear weapons testing and the use of nuclear reactors to produce electricity. Artificial radioactive contamination of Earth's environment began with nuclear weapons testing during World War II, but did not become a prominent topic of public discussion until the 1980s. The Journal of Environmental Radioactivity (JER) was the first dedicated collection of literature on the subject, and it was not founded until 1984. [ 2 ] As demand for construction of nuclear power plants increased, it became necessary to understand how radioactive material interacts with various ecosystems in order to prevent or minimize potential damage. The aftermath of Chernobyl was the first major employment of radioecological techniques to combat radioactive pollution from a nuclear power plant. [ 3 ] [ 4 ] Collection of radioecological data from the Chernobyl disaster was performed on a private basis. Independent researchers collected data regarding the various dosage levels and geographical differences among the afflicted areas, allowing them to draw conclusions about the nature and intensity of the damage caused to ecosystems by the disaster. [ 5 ] These local studies were the best available resources in containing the effects of Chernobyl, yet the researchers themselves recommended a more cohesive effort between the neighboring countries to better anticipate and control future radioecological issues, especially considering the ongoing terrorism threats of the time and the potential use of a "dirty bomb." [ 6 ] Japan faced similar issues when the Fukushima Daiichi nuclear disaster occurred, as its government also experienced difficulty organizing collective research efforts. An international radioecology conference was held for the first time in 2007 in Bergen, Norway. [ 7 ] European scientists from various countries had been pushing for joint efforts to combat radioactivity in the environment for three decades, but governments were hesitant to attempt this because of the secrecy involved in nuclear research, as technological and military developments remained competitive. [ 8 ] The aims of radioecology are to determine the concentrations of radionuclides in the environment, to understand their methods of introduction, and to outline their mechanisms of transfer within and between ecosystems.
Radioecologists evaluate the effects of both natural and artificial radioactivity on the environment itself as well as dosimetrically on the human body. Radionuclides transfer between all of Earth's various biomes, so radioecological studies are organized within three major subdivisions of the biosphere: land environments, oceanic aquatic environments, and non-oceanic aquatic environments. [ 9 ] Nuclear radiation is harmful to the environment over immediate (seconds or fractions thereof) as well as long-term (years or centuries) timescales, and it affects the environment on both microscopic (DNA) and macroscopic (population) levels. Degrees of these effects are dependent on external factors, especially in the case of humans. Radioecology encompasses all radiological interactions affecting biological and geological material as well as those between different phases of matter, as each is capable of carrying radionuclides. Occasionally, the origin of radionuclides in the environment is nature itself, as some geological sites are rich in radioactive uranium or produce radon emissions. The largest source, however, is artificial pollution via nuclear meltdowns or expulsion of radioactive waste from industrial plants. The ecosystems at risk may also be fully or partially natural. An example of a fully natural ecosystem might be a meadow or old-growth forest affected by fallout from a nuclear accident such as Chernobyl or Fukushima, while a semi-natural ecosystem might be a secondary forest, farm, reservoir, or fishery that is at risk of contamination from some source of radionuclides. [ 10 ] Basic herbaceous or bivalve species such as mosses, lichens, clams, and mussels are often the first organisms affected by fallout in an ecosystem, [ 11 ] as they are in closest proximity to the abiotic sources of radionuclides (atmospheric, geological, or aquatic transfer). These organisms often possess the highest measurable concentrations of radionuclides, making them ideal bioindicators for sampling radioactivity in ecosystems. In the absence of sufficient data, radioecologists must often rely on analogs of a radionuclide to attempt to evaluate or hypothesize about certain ecotoxicological or metabolic effects of rarer radionuclides. In general, techniques in radioecology focus on the study of environmental bioelectromagnetism, bioelectrochemistry, electromagnetic pollution, and isotope analysis. Earth in the 21st century is at risk from the accumulation of nuclear waste as well as the possibility of nuclear terrorism, both of which could lead to leaks. Radioactivity originating from the Northern Hemisphere [ 12 ] is observable dating back to the mid-20th century. Some highly toxic radionuclides have particularly long radioactive half-lives (up to millions of years in some cases [ 2 ] ), meaning they will virtually never disappear on their own. The impact of these radionuclides on biological material (correlated with their radioactivity and toxicity) is similar to that of other environmental toxins, making them difficult to trace within plants and animals. [ 2 ] Some aging nuclear facilities were not originally intended to operate as long as they have, and the consequences of their waste procedures were not well understood when they were built. One example is how the radionuclide tritium is sometimes released into the surrounding environment as a result of nuclear reprocessing, as this was not a foreseen complication in the original waste-management procedures.
It is difficult to diverge from these procedures once a reactor has been put to use, since any change either risks releasing even more radioactive material or jeopardizes the safety of the individuals working on the disposal. Protection of human well-being has been, and remains to this day, paramount among the aims of radioecological research and risk assessment. Radioecology often calls into question the ethics of protecting human health versus preserving the environment in the interest of fighting the extinction of other species, [ 13 ] but public opinion on this matter is shifting. [ 14 ]
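The kind of environmentally predictive simulation model mentioned above can be sketched very simply. The following Python example is purely illustrative (it is not from the article, and every rate constant is a hypothetical placeholder): a single radionuclide moves between soil, plant, and herbivore compartments with first-order transfer and radioactive decay:

```python
import math

# Hypothetical first-order rate constants, per day (placeholders, not data)
DECAY = math.log(2) / (30.0 * 365.25)   # ~30-year half-life, e.g. Cs-137-like
K_SOIL_TO_PLANT = 0.002                 # root uptake
K_PLANT_TO_ANIMAL = 0.05                # grazing intake
K_ANIMAL_LOSS = 0.01                    # biological elimination

def step(soil, plant, animal, dt=1.0):
    """Advance compartment activities (Bq) by dt days."""
    uptake = K_SOIL_TO_PLANT * soil * dt
    intake = K_PLANT_TO_ANIMAL * plant * dt
    loss = K_ANIMAL_LOSS * animal * dt
    soil += -uptake - DECAY * soil * dt
    plant += uptake - intake - DECAY * plant * dt
    animal += intake - loss - DECAY * animal * dt
    return soil, plant, animal

state = (1.0e6, 0.0, 0.0)   # all activity initially deposited on the soil
for _ in range(365):
    state = step(*state)
print("After one year (Bq): soil=%.0f, plant=%.0f, animal=%.0f" % state)
```

Real assessment models track many more compartments and use empirically derived transfer factors; this sketch only shows the structure of such a model.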
https://en.wikipedia.org/wiki/Radioecology
Radiofacsimile, radiofax or HF fax is an analogue mode for transmitting grayscale images via high frequency (HF) radio waves. It was the predecessor to slow-scan television (SSTV). It was the primary method of sending photographs from remote sites (especially islands) from the 1930s to the early 1970s. It is still in limited use for transmitting weather charts and information to ships at sea. Richard H. Ranger, an electrical engineer working at Radio Corporation of America (RCA), invented a method for sending photographs through radio transmissions. He called his system the wireless photoradiogram, in contrast to the fifty-year-old telefacsimile devices, which first used telegraph wires and were later adapted to use the newer telephone wires. On 29 November 1924, Ranger's system was used to send a photograph from New York City to London. It was an image of President Calvin Coolidge and was the first transoceanic radio transmission of a photograph. Also that year, AT&T engineer Herbert E. Ives transmitted the first color photograph. [ 2 ] Charles J. Young, son of the RCA founder Owen D. Young, and Ernst Alexanderson developed a radio facsimile system for General Electric. On 12 August 1931 this system successfully transmitted a copy of the Union-Star newspaper of Schenectady, New York to the transatlantic liners America and Minnekahda. It took 15 minutes to copy a single page measuring 8 1⁄2 by 9 inches (220 by 230 mm). [ 3 ] Beginning in the late 1930s, the Finch Facsimile system was used to transmit a "radio newspaper" to private homes via commercial AM radio stations and ordinary radio receivers equipped with Finch's printer, which used thermal paper. Sensing a new and potentially golden opportunity, competitors soon entered the field, but the printer and special paper were expensive luxuries, AM radio transmission was very slow and vulnerable to static, and the newspaper was too small. After more than ten years of repeated attempts by Finch and others to establish such a service as a viable business, the public, apparently quite content with its cheaper and much more substantial home-delivered daily newspapers, and with conventional spoken radio bulletins to provide any "hot" news, still showed only a passing curiosity about the new medium. [ 4 ] During World War II thousands of photographs were transmitted from Europe, and from the Pacific islands, to the United States. The major news agencies (AP, UPI, Reuters) maintained their own transoceanic radio facsimile transmitters as close to the action as they could. The iconic flag raising on Iwo Jima was printed in hundreds of American newspapers within a day of being taken, because it was transmitted from Guam to New York City by wireless radiofacsimile, a distance of 12,781 km (7,942 mi). [ 5 ] By the late 1940s, radiofax receivers were sufficiently miniaturized to be fitted beneath the dashboard of Western Union's "Telecar" telegram delivery vehicles. [ 6 ] In the 1960s, the United States Army transmitted the first photograph via satellite facsimile to Puerto Rico from the Deal Test Site using the Courier satellite. A decade after the introduction of radiofax, the National Weather Service (NWS) began transmitting weather maps using radiofax technology. The NWS named this new service weatherfax (a portmanteau of "weather facsimile"). The cover of the regular NOAA publication on frequencies and schedules states "Worldwide Marine Radiofacsimile Broadcast Schedules".
Facsimile machines were used in the 1950s to transmit weather charts across the United States, first via land-lines and then internationally via HF radio. Radio transmission of weather charts provides an enormous amount of flexibility to marine and aviation users, since they have the latest weather information and forecasts at their fingertips to use in planning voyages. Since the amount of information transmitted per unit time is directly proportional to the available bandwidth, the speed at which a weather chart can be transmitted varies with the quality of the medium used for transmission. Today, radiofax data is available via FTP downloads from sites on the Internet, such as those hosted by the National Oceanic and Atmospheric Administration (NOAA). Radiofax transmissions are also broadcast by NOAA from multiple sites in the country on regular daily schedules. Radio weatherfax transmissions are particularly useful to shipping, where there are limited facilities for accessing the Internet. The term weatherfax was coined for the technology that allows the transmission and reception of weather charts (surface analysis, forecasts, and others) from a transmission site (usually the meteorological office) to a remote site (where the actual users are). Radiofax may also be used to transmit pages of newspapers; stations like JJC transmit news this way using radio facsimile technology. Radiofax is transmitted in single sideband, which is a refinement of amplitude modulation. The signal shifts up or down a given amount to designate white or black pixels; a deviation less than that for a white or black pixel is taken to be a shade of grey. With correct tuning (1.9 kHz below the assigned frequency for USB, above for LSB), the signal shares some characteristics with SSTV, with black at 1.5 kHz and peak white at 2.3 kHz. Usually, 120 lines per minute (LPM) are sent (for monochrome fax, possible values are 60, 90, 100, 120, 180 and 240; for colour fax, LPM can be 120 or 240 [ 7 ] ). A value known as the index of cooperation (IOC) must also be known to decode a radiofax transmission; this governs the image resolution and derives from early radiofax machines which used drum readers. It is the product of the total line length and the number of lines per unit length (sometimes known as the factor of cooperation), divided by π. Usually the IOC is 576. The automatic picture transmission (APT) format permits unattended monitoring of services. It is employed by most terrestrial weather facsimile stations as well as geostationary weather satellites. Today, radiofax is primarily used worldwide for the dissemination of weather charts, satellite weather images, and forecasts to ships at sea. The oceans are covered by coastal stations in various countries. In the United States, fax weather products are prepared by a number of offices, branches, and agencies within the National Weather Service (NWS) of the National Oceanic and Atmospheric Administration (NOAA). Tropical and hurricane products come from the Tropical Analysis and Forecast Branch, part of the Tropical Prediction Center/National Hurricane Center. They are broadcast over US Coast Guard communication stations NMG, in New Orleans, LA, and NMC, the Pacific master station at Point Reyes, California.
After Hurricane Katrina damaged NMG, the Boston Coast Guard station NMF added a limited schedule of tropical warning charts. NMG is back at full capability, but NMF continues to broadcast these. All other products come from the Ocean Prediction Center (OPC) of the NWS, in cooperation with several other offices depending on the region and nature of the information. These also use NMG, NMC, and NMF, plus Coast Guard station NOJ in Kodiak, Alaska, and Department of Defense station KVM70 in Hawaii. Ever since the loss of the RMS Titanic highlighted the dangers of icebergs in the North Atlantic, the International Ice Patrol has also originated weather data, and its charts are broadcast by the Boston station during the prime iceberg season of February through September, using the call sign NIK. CBV, Playa Ancha Radio in Valparaiso, Chile broadcasts a daily schedule of Armada de Chile weather fax for the southeastern Pacific, all the way to the Antarctic. Also in the Pacific, Japan has two stations, as does the Bureau of Meteorology in Australia. Most European countries have stations, as does Russia. Kyodo News is the only remaining news agency to transmit news via radiofax. It broadcasts complete newspapers in Japanese and English, often at 60 lines per minute instead of the more usual 120 because of the greater complexity of written Japanese; a full day's news takes dozens of minutes to transmit. Kyodo has a dedicated transmission to Pacific fishing fleets from Kagoshima Prefectural Fishery Radio, and a relay from 9VF/252, which is said to be located in Singapore. These transmitters are considerably more powerful than others used for this mode. The German Meteorological Service (Deutscher Wetterdienst, DWD) transmits a regular daily schedule of weather charts on three frequencies, 3.855 MHz, 7.88 MHz and 13.8825 MHz, from its LF and HF transmitting facility in Pinneberg.
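The numbers above fix the basic scan geometry, which a short sketch can make explicit. The following Python example is illustrative only (not from the article): it derives pixels per line from the IOC definition and maps a demodulated audio tone to luminance using the black/white frequencies given in the text:

```python
import math

def fax_scan_parameters(lpm=120, ioc=576):
    """Pixels per line and timing implied by the IOC definition above:
    IOC = (line length x lines per unit length) / pi, so a full scan line
    spans ioc * pi pixels."""
    pixels_per_line = ioc * math.pi      # ~1810 pixels for IOC 576
    line_duration = 60.0 / lpm           # seconds per scan line (0.5 s at 120 LPM)
    pixel_rate = pixels_per_line / line_duration
    return pixels_per_line, line_duration, pixel_rate

def tone_to_luminance(freq_hz, black=1500.0, white=2300.0):
    """Map a demodulated tone to 0..1 luminance: black at 1.5 kHz, peak white
    at 2.3 kHz, with intermediate deviations taken as shades of grey."""
    return min(1.0, max(0.0, (freq_hz - black) / (white - black)))

px, dur, rate = fax_scan_parameters()
print(f"{px:.0f} pixels/line, {dur:.2f} s/line, {rate:.0f} pixels/s")
print(tone_to_luminance(1900.0))         # 0.5, a mid grey
```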
https://en.wikipedia.org/wiki/Radiofax
Radiofrequency Echographic Multi Spectrometry (REMS) is a non-ionizing technology for osteoporosis diagnosis and fracture risk assessment. REMS processes the raw, unfiltered ultrasound signals acquired during an echographic scan of the axial sites, femur and spine. The analysis is performed in the frequency domain. Bone mineral density (BMD) is estimated by comparing the results against reference models. The accuracy has been tested by comparison against DXA technology. [ 1 ] [ 2 ] [ 3 ] [ 4 ] Traditionally, ultrasound B-mode imaging has been designed to allow a visual evaluation of human organs and their features by clinicians; however, this implies that the huge quantity of information carried by ultrasound signals is processed and significantly reduced for visualization purposes. REMS technology instead analyses the raw, unfiltered ultrasound signals by comparing their spectral representation with the spectral models stored in a proprietary database, previously obtained from healthy and osteoporotic patients; these models are specific and vary with sex, age, BMI and skeletal site. The comparison allows the estimation of the patient's BMD [ 5 ] [ 6 ] as well as a fast and reliable diagnostic classification, compliant with the recommendations and diagnostic criteria defined by the World Health Organization. REMS scans of the femur and spine last 40 and 80 seconds, respectively, allowing the acquisition of several thousand ultrasound signals related to the skeletal site under examination. The patented algorithm (see [ 5 ] [ 6 ] for more details) automatically processes these signals on the basis of their spectral features; each signal can be classified as reliable and included in the pipeline for the computation of the diagnostic parameters or, alternatively, classified as unreliable and discarded. During the analysis phase, the acquired spectra are compared to the spectral models stored in the database; afterwards, the values obtained from each comparison are averaged, leading to a precise and repeatable estimation of the diagnostic parameters of interest. If substantial differences are detected between one or more acquired signals and the reference spectral models, these samples are identified, classified as unreliable and automatically discarded: for instance, spectra which are not clearly associated with bone portions but with osteophytes or calcifications. Hence, this approach natively identifies and eliminates outliers, bringing significant advantages with respect to the clinical reliability of the obtained results. [ 7 ] [ 8 ] [ 9 ] REMS technology performance has been evaluated through multicentre clinical studies. [ 1 ] [ 10 ] The work of Di Paola et al. [ 1 ] investigated the precision and diagnostic accuracy of REMS in comparison with DXA on a sample of 2000 patients. A very high correlation was observed between the T-score values obtained by the two technologies (Pearson correlation coefficient > 0.93; Cohen's kappa equal to 0.82 for the lumbar spine and 0.79 for the femoral neck), as well as a very low average BMD difference between the two techniques (mean ± 2 standard deviations): −0.004±0.088 g/cm 2 for the lumbar spine and −0.006±0.076 g/cm 2 for the femoral neck. Furthermore, the specificity and sensitivity of REMS in discriminating between osteoporotic and non-osteoporotic patients have been evaluated: sensitivity and specificity exceed 91% for both skeletal sites.
Additional outcomes of this study are the values of precision and repeatability of REMS estimates, assessed using the root mean square coefficient of variation (CV-RMS): precision was evaluated as 0.38% for the lumbar spine and 0.32% for the femoral neck, whereas the least significant change (LSC) was 1.05% and 0.88%, respectively. Finally, inter-operator repeatability was calculated as 0.54% for the lumbar spine and 0.48% for the femoral neck. These values are significantly lower than those reported for DXA in the scientific literature [ 11 ] and offer concrete advantages from the point of view of short-term follow-up of patients undergoing therapeutic treatments. [ 12 ] [ 13 ] Observational longitudinal studies have further evaluated the performance of the REMS T-score in the identification of patients at risk of fragility fracture. [ 1 ] [ 2 ] Specifically, in Adami et al., [ 1 ] a group of more than 1,500 patients underwent both DXA and REMS scans. Afterwards, these patients were monitored for a period of up to 5 years in order to estimate the incidence of fragility fractures in relation to the T-score values previously obtained with both technologies. The study demonstrated that the REMS T-score is an effective parameter for predicting the occurrence of fragility fractures, leading the authors to positive conclusions about the effectiveness of REMS technology in the identification of patients at risk of osteoporotic fracture. As widely reported in the scientific literature, [ 14 ] bone density is just one of the components of bone strength, and thus it only partially predicts bone fragility. In order to overcome this limitation, a novel parameter, the Fragility Score, has been developed. The Fragility Score evaluates bone microstructural features independently of BMD; it is based on the assumption that a fragile bone structure has microstructural features which influence the spectral characteristics of the acquired ultrasound signal in ways that differ from those of a robust bone structure. The Fragility Score is a dimensionless parameter, ranging from 0 to 100, obtained by comparing the spectra of the acquired ultrasound signals with spectral reference models obtained from patients who did, or did not, develop an osteoporotic fracture. This parameter has been validated through clinical studies, and its accuracy has demonstrated a performance similar to DXA BMD. [ 15 ] [ 16 ] In a recent publication, REMS technology has received the attention of the European Society for Clinical and Economic Aspects of Osteoporosis, Osteoarthritis and Musculoskeletal Diseases (ESCEO). In this work, all the available technologies for bone strength assessment and fracture risk estimation have been reviewed and discussed in relation to currently unmet clinical needs. [ 14 ] In this context, REMS has been considered a valuable approach for osteoporosis diagnosis and for fracture risk assessment, at the same time overcoming several of the limitations acknowledged for currently available bone health assessment technologies. One example is the work of Degennaro et al., [ 17 ] in which a significant BMD reduction was detected in pregnant women compared to non-pregnant women for the very first time. Several international working groups have used REMS technology for research purposes: Bojincă et al. [ 18 ] have demonstrated the effectiveness of REMS BMD estimates in patients affected by rheumatoid arthritis. Kirilova et al.
[ 19 ] assessed the values of lumbar spine and hip REMS-based BMD in premenopausal and postmenopausal women. In Khu et al., [ 20 ] REMS has been used to characterize the relationship between body mass index and bone health. The growing interest in REMS is also demonstrated by the publication of scientific review papers focused on this technology. [ 21 ] [ 22 ]
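The precision figures above follow a standard relationship between precision error and least significant change, which can be checked with a few lines of Python. This sketch is illustrative only and assumes the conventional densitometry formula LSC = 2.77 × precision error (95% confidence, two measurements), which is not stated explicitly in the text:

```python
def least_significant_change(cv_rms_percent: float) -> float:
    """LSC at 95% confidence for two measurements, assuming the conventional
    densitometry relation LSC = 2.77 x precision error (CV-RMS)."""
    return 2.77 * cv_rms_percent

for site, cv in [("lumbar spine", 0.38), ("femoral neck", 0.32)]:
    print(f"{site}: CV-RMS {cv}% -> LSC {least_significant_change(cv):.2f}%")
# lumbar spine: CV-RMS 0.38% -> LSC 1.05%
# femoral neck: CV-RMS 0.32% -> LSC 0.89% (the study reports 0.88%)
```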
https://en.wikipedia.org/wiki/Radiofrequency_Echographic_Multi_Spectrometry